anil

Tuesday, February 26, 2013

The real dangers ahead


In the past few months I have been following the Republican strategy for getting back into power, and it is one of the most cynical and despicable strategies I have seen any political party employ.

It rests on five pillars:

a. An attack on civil liberties seeking to overturn the Voting Rights Act of 1965, thus disenfranchising minorities. This path lies through the Supreme Court, and a look at its docket shows the gains that have been made. Witness the Blum story in the Washington Post today, which lays out one of the many strands at play. According to liberals, he is leading a charge that could gut affirmative action and key voting rights protections for minorities and could upend decades of civil rights law. 

Derfner, who helped shape the Voting Rights Act through Supreme Court arguments in the late 1960s, exchanged polite conversation with Blum during the flight. But he left wondering: “How can a nice person be doing such awful things? The notion that the tiny infinitesimal group of circumstances in which a black person may get some favoritism ... is the nation’s issue when blacks are on the bottom every single day, in every single way is just insane... What people like Edward Blum are doing is ignoring reality.”


The Voting Rights Act is the Republican Party’s worst enemy because it contains mechanisms to prevent policies that will disenfranchise voters based on race. Under Section 5 of the VRA, the worst offenders of racist voting policies must seek preclearance of proposed voting policy changes with the Department of Justice. On Wednesday, the Supreme Court will hear arguments in Shelby County, Alabama v. Holder. In its brief, Shelby County claims that racial segregation and discrimination no longer exist, so we really don’t need Section 5. 

b. Redistricting. This has been going on in the Republican states ever since Abramoff started the move some ten years ago. By deliberately cordoning off areas favorable to the Republicans, the attempt is to create rock-solid red districts for a long time to come.

Republicans know that most of America rejects their increasingly extreme ideology and that they can’t win a fair election. That is at the core of their war on voting rights in the form of gerrymandering, voter ID laws, and restrictions on voting days and hours. The available data show that these methods not only establish a system in which some votes carry more weight than others, but also seek to disenfranchise particular groups of people: minorities, women and the working poor. 


According to Republican strategist Karl Rove, "He who controls redistricting can control Congress." In 2010 state races, Republicans picked up 675 legislative seats, gaining complete control of 12 state legislatures. As a result, the GOP oversaw the redrawing of lines for four times as many congressional districts as Democrats. How did they dominate redistricting? A ProPublica investigation has found that the GOP relied on opaque nonprofits funded by dark money, supposedly nonpartisan campaign outfits, and millions in corporate donations to achieve Republican-friendly maps throughout the country. Two tobacco giants, Altria and Reynolds, each pitched in more than $1 million to the main Republican redistricting group, as did Rove's super PAC, American Crossroads; Walmart and the pharmaceutical industry also contributed. Other donors, who gave to the nonprofits Republicans created, may never have to be disclosed.

c. Restricting voting by minorities. The most blatant examples of this were in Pennsylvania and Florida, where the Republican leadership openly campaigned for Romney by restricting minorities' efforts to vote. Thus Sunday voting before the election was cancelled, voting hours were cut down, and new obstacles were placed through voter ID laws.

Jim Greer, the former head of the Florida Republican Party, recently claimed that a law shortening the early voting period in the state was deliberately designed to suppress voting among groups that tend to support Democratic candidates. “The Republican Party, the strategists, the consultants, they firmly believe that early voting is bad for Republican Party candidates,...It’s done for one reason and one reason only...‘We’ve got to cut down on early voting because early voting is not good for us.’"

Pennsylvania House Majority Leader Mike Turzai (R-PA) finally admitted what so many have speculated: voter identification efforts are meant to suppress Democratic votes in this year’s election.

d. Undisclosed money. The Citizens United case was the first shot in controlling the flow of funds to the parties. By getting all restrictions removed through the Supreme Court, the Republicans in effect hoped to outspend the Democrats through their deep-pocketed sponsors.


The Citizens United ruling, released in January 2010, tossed out the corporate and union ban on making independent expenditures and financing electioneering communications. It gave corporations and unions the green light to spend unlimited sums on ads and other political tools, calling for the election or defeat of individual candidates. In a nutshell, the high court’s 5-4 decision said that it is OK for corporations and labor unions to spend as much as they want to convince people to vote for or against a candidate. The decision did not affect contributions. It is still illegal for companies and labor unions to give money directly to candidates for federal office. The court said that because these funds were not being spent in coordination with a campaign, they “do not give rise to corruption or the appearance of corruption.” But what was the real life effect of this ruling?

It has led to the creation of the super PACs, which act as shadow political parties. They accept unlimited donations from billionaires, corporations and unions and use them to buy advertising, most of it negative. So far in the 2011-2012 election cycle, super PACs have spent $378 million, while non-disclosing nonprofits have spent $171 million, at times praising, but mostly badmouthing, candidates, according to figures compiled by the Center for Responsive Politics. Worse, the totals spent on the presidential election are now topping $2 billion.


e. Right-wing media. Here the work of Fox News should not be underestimated. By having a 24x7 network continuously pumping out their propaganda, the Republicans gain a significant advantage in shaping the public dialogue. This dialogue is further corrupted by radio talk show hosts like Rush Limbaugh and others like him.

I realize some of the above is just the rough and tumble of politics, but the sinister way that the Republicans are plotting their return to power does need to be exposed and countered. At base, the Republicans continue to believe that they can reclaim the lucrative levers of national authority by making the country as ungovernable as possible while a Democrat is in the White House, essentially holding governance hostage until they are restored to power. Then, the Democrats are expected to behave as a docile opposition “for the good of the country” (and usually do). The “destroy Obama” game plan tracks most closely with Newt Gingrich’s strategy for undermining Bill Clinton 16 years ago. But today’s strategy also traces back to Richard Nixon’s sabotage of President Lyndon B. Johnson’s Vietnam peace talks in 1968 and Ronald Reagan’s October Surprise gambit against President Jimmy Carter’s Iran hostage negotiations in 1980. In all four cases – covering the last four Democratic presidencies – the Republicans did not behave as a loyal opposition but rather as a single-minded political enemy that viewed the White House as its birthright and Democratic control of the Executive Branch as illegitimate.

In each of the areas marked above, the average citizen, and particularly the Democrats, need to be alive to the dangers ahead and remain vigilant so that these efforts do not undermine democracy in the country. The real danger lies in the fact that the Democrats may underestimate the desperation of the Republicans to gain power and treat these as isolated attempts to tip the scales. The fact is that the Republicans have seen the future in the looming demographics and come to the conclusion that the only way to power will be through cheating and fixing the scales. Hell hath no fury like a man denied what he thinks he is entitled to.

Thankfully Frank Rich has a better perspective:


"It’s gotten so gloomy that at the annual House Republican retreat just before Inauguration Day in January, the motivational speakers included the executive who turned around Domino’s Pizza and the first blind man to reach the top of Mount Everest. Were the GOP a television network, it would be fifth-place NBC, falling not only behind its traditional competitors but Univision. Every postelection poll, with the possible exception of any conducted in Dick Morris’s bunker, finds that voters favor the Democrats’ positions on virtually every major issue, usually by large margins: immigration reform, gun restrictions, abortion rights, gay marriage, climate change, raising the minimum wage, and the need for higher tax revenue to accompany spending cuts in any deficit-reduction plan. Given that losing hand, what’s a party to do? It’s far easier for NBC to cancel Smash than for the GOP to give the hook to an elected official like Steve Stockman, the Texas congressman whose guest at the State of the Union was the rocker turned NRA spokesman Ted Nugent, best known for telling the president to “suck on my machine gun.” For every Todd Akin who fades, another crazy Stockman (or two) springs up. Strategies to work around the party’s entrenched liabilities have been proliferating since November 6, as Republicans desperately try to stave off the terminal Kübler-Ross stage of Acceptance.

The Republican Plan is simplicity itself: steal future elections by disenfranchising those Americans who keep rejecting the party at the polls (blacks, young people, Latinos). This strategy was hatched even before Election Day, with widespread local efforts to reinstate Jim Crow obstacles at the ballot box, from reduced voting hours to new identification requirements. After the election, a parallel scheme was revived: state laws that propose slicing and dicing the Electoral College to increase the odds that a Republican presidential candidate could win an election while losing the popular vote. Next up is the Supreme Court, ruling this term on a new challenge to the Voting Rights Act of 1965. That signature civil-rights law, born in the crucible of Martin Luther King Jr.’s incarceration in Selma, was reenacted with bipartisan unanimity in 2006 (the vote was 98-0 in the Senate, 390-33 in the House). But now that the GOP is under existential threat, the highly political chief justice, John Roberts, seems poised to do what he has to do. He’s already on record saying that “things have changed in the South”—which may come as news to the African-Americans forced to wait for hours in Florida (and elsewhere) to vote last November."

The dangers in smart technologies


Many smart technologies are heading in a disturbing direction. A number of thinkers in Silicon Valley see these technologies as a way not just to give consumers new products that they want but to push them to behave better. Sometimes this will be a nudge; sometimes it will be a shove. But the central idea is clear: social engineering is disguised as product engineering.

Morozov argues that there is reason to worry about this approaching revolution. As smart technologies become more intrusive, they risk undermining our autonomy by suppressing behaviors that someone somewhere has deemed undesirable. Smart forks inform us that we are eating too fast. Smart toothbrushes urge us to spend more time brushing our teeth. Smart sensors in our cars can tell if we drive too fast or brake too suddenly.

True these devices can give us useful feedback, but they can also share everything they know about our habits with institutions whose interests are not identical with our own. Insurance companies already offer significant discounts to drivers who agree to install smart sensors in order to monitor their driving habits. How long will it be before customers can't get auto insurance without surrendering to such surveillance? And how long will it be before the self-tracking of our health (weight, diet, steps taken in a day) graduates from being a recreational novelty to a virtual requirement?

How can we avoid completely surrendering to the new technology?

The key is learning to differentiate between "good smart" and "bad smart."

Devices that are "good smart" leave us in complete control of the situation and seek to enhance our decision-making by providing more information. For example: An Internet-jacked kettle that alerts us when the national power grid is overloaded doesn't prevent us from boiling yet another cup of tea, but it does add an extra ethical dimension to that choice. Likewise, a grocery cart that can scan the bar codes of products we put into it, informing us of their nutritional benefits and country of origin, enhances—rather than impoverishes—our autonomy.

Technologies that are "bad smart," by contrast, make certain choices and behaviors impossible. Smart gadgets in the latest generation of cars—breathalyzers that can check if we are sober, steering sensors that verify if we are drowsy, facial recognition technologies that confirm we are who we say we are—seek to limit, not to expand, what we can do. This may be an acceptable price to pay in situations where lives are at stake, such as driving, but we must resist any attempt to universalize this logic.

The most worrisome smart-technology projects start from the assumption that designers know precisely how we should behave, so the only problem is finding the right incentive. A truly smart trash bin, by contrast, would make us reflect on our recycling habits and contribute to conscious deliberation—say, by letting us benchmark our usual recycling behavior against other people in our demographic. There are many other contexts in which smart technologies are unambiguously useful and even lifesaving. Smart belts that monitor the balance of the elderly and smart carpets that detect falls seem to fall into this category.

But the problem with many smart technologies is that their designers, in the quest to root out the imperfections of the human condition, seldom stop to ask how much frustration, failure and regret is required for happiness and achievement to retain any meaning.

It's great when the things around us run smoothly, but it's even better when they don't do so by default. That, after all, is how we gain the space to make decisions—many of them undoubtedly wrongheaded—and, through trial and error, to mature into responsible adults, tolerant of compromise and complexity.

To grasp the intellectual poverty that awaits us in a smart world, look no further than recent blueprints for a "smart kitchen". Once we step into this magic space, we are surrounded by video cameras that recognize whatever ingredients we hold in our hands. Tiny countertop robots inform us that, say, arugula doesn't go with boiled carrots or that lemon grass tastes awful with chocolate milk. This kitchen might be smart, but it's also a place where every mistake, every deviation from the master plan, is frowned upon. It's a world that looks more like a Taylorist factory than a place for culinary innovation. Rest assured that lasagna and sushi weren't invented by a committee armed with formulas or with "big data" about recent consumer wants.

The fact is that it is creative experimentation that propels our culture forward. That our stories of innovation tend to glorify the breakthroughs and edit out all the experimental mistakes doesn’t mean that mistakes play a trivial role. As any artist or scientist knows, without some protected, even sacred space for mistakes, innovation would cease. An inventor’s path is chorused with groans, riddled with fist-banging and punctuated by head scratches. Stumbling upon the next great invention in an “ah-ha!” moment is a myth. It is only by learning from mistakes that progress is made.

A case in point is the story of James Dyson, the inventor of the famous vacuum cleaner. From cardboard and duct tape to ABS polycarbonate, it took 5,127 prototypes and 15 years to get it right. And even then there was more work to be done. His first vacuum, the DC01, went to market in 1993. They are now up to the DC35, having improved with each iteration. It’s a never-ending process that is enormously rewarding, and endlessly frustrating. Or look at Edison, who famously said, “I have not failed. I’ve just found 10,000 ways that won’t work.” Those 10,000 detours resulted in the Dictaphone, mimeograph, stock ticker, storage battery, carbon transmitter and his joint invention of the light bulb. In the end, 10,000 flops fade into insignificance alongside Edison’s 1,093 patents.

The ability to learn from mistakes — trial and error — is a valuable skill we learn early on. Recent studies show that encouraging children to learn new things on their own fosters creativity. Direct instruction leads to children being less curious and less likely to discover new things. Punishing mistakes doesn’t lead to better solutions or faster results. It stifles invention. By fostering an environment where failure is embraced, even those of us far from our student days have the freedom to make mistakes — and learn from them still. No one is going to get it right the first time. Instead of being punished for mistakes along the way, we need to learn from them. 

With "smart" technology in the ascendant, it will be even harder to resist the allure of a frictionless, problem-free future. When Eric Schmidt, Google's executive chairman, says that "people will spend less time trying to get technology to work…because it will just be seamless," he is not wrong: This is the future we're headed toward. But not all of us will want to go there. As James Dyson and Edison show, the many detours in the journey of innovation may be even more important than a frictionless path of discovery.

A better smart-design paradigm would happily acknowledge that the task of technology is not to liberate us from problem-solving. Rather, we need to enroll smart technology in helping us with problem-solving. What we want is not a life where friction and frustrations have been carefully designed out, but a life where we can overcome the frictions and frustrations that stand in our way.

Designers of smart technologies in the future need to take stock of the complexity and richness of the lived human experience—with its gaps, challenges and conflicts—if their inventions are to change history. They should embrace failures along the path rather than seek to eliminate them.



Monday, February 25, 2013

It is what it is


In an essay, Goodin asserts that the phrase “it is what it is” is one of the worst in the English language since it denotes resignation, even despair, while oozing with self-congratulatory smugness. It is a fatal combination, because one really should not be self-congratulatory about despair.

It is particularly sad when people use this phrase to rationalize their own disappointment. Sure, it is usually better to recognize reality, and to move on, than to rage and gnash one’s teeth. Still, some realities can be changed. The phrase, instead, is a formula for settling and accepting reality, and sometimes we should not settle or accept it. Or if we do, we should not do so without believing that the world is tractable and that somewhere there may be a solution to our problems. Or should we, as the poet Dylan Thomas urged his father, “burn and rave at close of day” rather than surrender meekly to it?


“Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light."

The most common use of the phrase “it is what it is” lies in the idea that we should settle for life as it is. But there is a difference between “all that you want” and “all that you deserve”—and when you settle, you may fall short of both goals. 

Actually the phrase is always awful, but it is especially despicable when used by someone in a position of authority, who invokes it to dash the hopes of someone in a position of supplication. Imagine, for example, an employer telling a job applicant that she isn’t going to be hired, or a bank officer telling a would-be homeowner that he isn’t getting a loan, or a criminal lawyer telling a client that he’s going to have to go to jail, all of them explaining that “it is what it is.” “In such contexts,” Goodin states, “the phrase is no explanation at all, and its use is a denial of both responsibility and empathy.”

However, settling, understood in terms of fixity, does have a number of virtues. First, it helps to promote planning. One advantage of a proper settlement is that it produces not merely an end but also a secure one. If we are in the midst of a fight, we might well hope to settle, because fights are ugly and potentially dangerous. Being “unsettled” is worse than merely being “uncertain”—it is a sort of stultifying uncertainty, one that “stymies your planning.” A major virtue of settling is that it provides people with fixed points, enabling them to organize their lives. It can be costly to make decisions, and if we have to make decisions about everything, those costs will quickly spiral out of control. After all, people have limited cognitive capacities, and therefore we must “take some things as given” so that we can decide “what to do about some other things.”

Beyond reducing cognitive burdens and promoting planning, Goodin contends that settling is indispensable to commitment, trust, and confidence. Our characters are defined by our commitments to certain projects, principles, and values; unless those commitments are settled, they are not commitments at all. Put constantly up for grabs, they cannot have an appropriate place in your life. Trust itself requires fixity. If you are not fixed in certain relationships and practices, people cannot trust you. And confidence—with respect to ourselves, others, and states of the world—will not exist without a lot of settlements.

Goodin is careful to distinguish settling from three closely related concepts with which it might be confused. First, settling is not merely a matter of compromising. When you make restitution to someone you have wronged, and thus “settle up,” you are not compromising at all. Second, settling is not necessarily conservative. You might settle on an abstract theory, and work hard to implement it, and it might involve radical reform. You might settle on a plan of life that requires you to help bring about dramatic social change. Third, settling need not be a form of resignation (or it-is-what-it-is-ism). One reason to settle is that you cannot decide everything at once, and settling frees you up to pursue other matters. You settle on X in order to pursue Y, and in pursuing Y you are anything but resigned.

True, it sometimes makes sense to switch from settling to striving. You might decide to re-open a matter that you had thought fixed. Perhaps you have won the lottery, and it is time to consider a new place to live. Perhaps nothing dramatic has happened or changed, but a set of small developments, taken cumulatively, suggest that you really should consider a new job in Boston. This last point suggests a problem, or maybe even a paradox, which is that settlements are not really rational unless people are prepared to update their commitments in light of what they learn—in which case they might not be counted as settlements at all. We also need to settle in order “to clear the decks and free up resources.” Striving requires settling.

But when should you settle? 

Economists would say that the answer depends on two factors: the costs of decisions and the costs of errors. Suppose that you are looking for a job. You get an offer, and while it is not ideal, it is certainly not bad. If you decline the offer and keep looking, you might be able to do better, but you might also do worse, and end up with nothing at all. If you accept the offer, the costs of decision fall to zero. The problem is that premature settlement can impose large error costs, in the form of economic and other losses. The same, of course, is true if you decide not to settle. A bird in the hand may not be worth two in the bush, but a bird in the hand is a lot better than no bird at all.

To decide whether to settle, people will need to assess the potential outcomes and their various probabilities. If you have an excellent chance of doing a lot better, you probably ought not to settle. And in making these judgments, you will be alert not only to the matter at hand, but to the range of decisions that you are facing, and hence to whether a decision to settle will make it easier to focus on more pressing matters.

What the economic literature does not sufficiently investigate are the emotional consequences of settling on the one hand and continued striving on the other. If you settle, you may end up kicking yourself, which is, well, unsettling, and corrosive. You have lost option value, which may be painful, and you might have forfeited a far better outcome, which may be worse. Settling can also produce the phenomenon of “adaptive preferences,” through which people adapt their desires to their situations. Adaptation can reduce or even eliminate distress, but if people are adapting their preferences to a bad situation, it runs into problems of its own. On the other hand, not settling can make people crazy. If you refuse to settle, you may be in a state of some anxiety, which may make it exceedingly difficult to plan and perhaps to do anything else.

Thus, when all is said and done, the phrase “it is what it is” isn’t all that bad. By moving on to other concerns, we make striving possible. After all, settling is not always resignation. It is a fact that no human life can do without resignation; we have to resign ourselves to that fact. But there are choices to be made in regard to when, and how, and with what attitude we approach the rest of our life. 

And in that I am with the poet: “Do not go gentle into that good night... Rage, rage against the dying of the light” before you go.



Sunday, February 17, 2013

The wisdom of the internet


Robert Cottrell shares with us four lessons he has learnt in five years of drinking from the fire hose that is the internet.

His first contention: this is a great time to be a reader. The amount of good writing freely available online far exceeds what even the most dedicated consumer might have hoped to encounter a generation ago within the limits of printed media. Not everything online is great writing. Perhaps only 1 per cent is of value to the intelligent general reader but another 4 per cent of the internet counts as entertaining rubbish. The remaining 95 per cent has no redeeming features. But even the 1 per cent of writing by and for the elite is an embarrassment of riches, a horn of plenty, a garden of delights. 

Where the internet excels is in serving up plentiful writing that sits one level down: at the level of very good daily journalism, whether on subjects of immediate interest for a general audience or more esoteric subjects for a specialised audience. Where is it coming from? Some of it comes from professional journalists, writing for the websites of established publications or on their own blogs. But much of it – the great new addition to our writing and reading culture – comes from professionals in other fields who find the time, the motivation and the opportunity to write for anyone who cares to read. As a gross generalisation, academics make excellent bloggers, within and beyond their specialist fields. So, too, do aid workers, lawyers, musicians, doctors, economists, poets, financiers, engineers, publishers and computer scientists. They blog for pleasure; they blog for visibility within their field; they blog to raise their value and build their markets as authors and public speakers; they blog because their peers do. Businessmen and politicians make the worst bloggers because they do not like to tell what they know, and telling what you know is the essence of blogging well. They also fear to be wrong; and, as Felix Salmon, Reuters’ finance blogger, insists and sometimes demonstrates: “If you are never wrong, you are never interesting”. To read the blog of a political scientist, or an anthropologist, or a lawyer, or an information technologist, is the next best thing to reading their mind; better, in some ways, since what they have to say emerges in considered form. These are the experts who, a couple of decades ago, would have functioned as sources for newspaper journalists. Their opinions would emerge often mangled and simplified, always truncated, in articles over which they had no final control. Now we can read them directly, and discover what they actually think and say. 
We can know, for example, what lawyers are saying about a new appointment to the Supreme Court; what political scientists expect from an election; how computer scientists evaluate Apple’s updated operating system; what economists expect from a new government policy. The general reader now has access to expertise through the internet that was easily available, a decade ago, only to the insider or the specialist.

His second contention as a professional reader is one that may seem self-evident in the world of blogging but also holds good across the whole universe of online writing and publishing: the writer is everything. The corollary of this also holds good: the publisher is nothing. Good writers write good pieces, regardless of subject and regardless of publication. Mediocre writers write mediocre pieces. And nothing at all can rescue a bad writer. A simple assertion, but put it in context and it becomes more complex and interesting. Think back to the days when print media ruled. Your basic unit of consumption was not the article, nor the writer, but the publication. You bought the publication in the hope or expectation that it would contain good writing. The publisher was the guarantor of quality. Professional writers still see value in having publishers online, not so much as guarantors of quality, but because publishers pay for writing – or, increasingly, if they do not pay for it, they do at least publish it in a place where it will get read. Readers, on the other hand, have less of a need for publishers. One striking trend of the past five years is that individual articles have uncoupled themselves from the places where they were first published, to lead their own lives across the internet, passed from hand to hand between readers.

This is due, in large part, to the rise of social media – primarily Facebook and Twitter. Five years ago, you needed to visit a publisher’s website to see what was new there. Now, you hear about a particular article through Twitter or Facebook; a friend will share the link; you may visit the page directly but more probably you will save the link to your Instapaper or your Readability account, or mark it for reading later in your Flipboard feed, or on your Kindle or other reading device, and you will enjoy the piece later, probably offline. The article is what matters to the reader; the place of original publication may not even be noticed. Indeed, from a reader’s point of view, many online publishers subtract value. Let us say you have a writer who wants a reader; and a reader who wants a writer. Perfect. But if there is a publisher involved, his instincts will probably be to fill the space between reader and writer with banner advertisements, the object of which is to distract the reader from reading. It seems almost inevitable that a new business model for reading and writing online will prevail in the future, which consists of readers rewarding directly the writers they admire. Almost inevitable, because this is by far the most efficient economic arrangement for both parties, and there are no longer any significant technological obstacles to its general adoption.

And so to his third contention: we overvalue new writing, almost absurdly so, and we undervalue older writing. You never hear anybody say, “I’m not going to listen to that record because it was released last year,” or, “I’m not going to watch that film because it came out last month.” Why are we so much less interested in journalism that’s a month or a year old? The answer is that we have been on the receiving end of decades of salesmanship from the newspaper industry, telling us that today’s newspaper is essential but yesterday’s newspaper is worthless. That distinction has been increasingly bogus since newspapers lost their news-breaking role to faster media 50 years ago, and began filling their pages with more and more timeless writing. While consumers had to rely on print media, the distinction between old and new could be sustained by availability: today’s newspaper was everywhere, yesterday’s newspaper was nowhere, except perhaps in the cat litter. Online, that distinction disappears – or it should. You can call up a year-old piece as easily as you can call up a day-old piece. And yet we hardly ever do so, because we are so hardly ever prompted to do so. Which condemns tens if not hundreds of thousands of perfectly serviceable articles to sleep in writers’ and publishers’ archives, written off, never to be seen again.

Why do even big publishing groups with the resources to do so make so little attempt to organise, prioritise and monetise their archives? Think of a newspaper or magazine as a mountain of data to which a thin new layer of topsoil gets added each day or each week. Everybody sees the new soil. But what’s underneath gets covered up and forgotten. Even the people who own the mountain don’t know what’s in the lower layers. They might try to find out but that demands a whole new set of tools. And, besides, they are too busy adding the new layer of topsoil each day. Actually the wisest new hire for any long-established newspaper or magazine would be a smart, disruptive archive editor. Why just sit on a mountain of classic content, when you could be digging into it and finding buried treasure?

His fourth contention is that the internet is a force for brevity. You may think of it as a place where people witter on for ever. But when you’re writing online, you don’t have to fill an expected space or length, as you do when you write for a print publication. When you have a fixed space to fill, the temptation is to provide the minimal decent amount of original work needed, wrapped up in the maximum tolerable amount of verbiage. When you have no particular space to fill, there’s no marginal utility to be derived from going on any longer than you need to. It helps, too, that when you’re writing online, there’s no need to introduce and source every person, place and fact you mention, and no need to fill in the backstory for those new to the subject. You can link out to the source document or the related story – or just assume your reader knows how to use Google and Wikipedia.

This trend towards brevity is even more marked when it comes to books. Online publishing has spawned a new category of short books, 10,000 to 30,000 words long – Kindle Singles, Penguin Shorts, Atavist Originals and others – that give writers the space in which to turn round a big idea or a big story quickly and nimbly. Very often, 10,000 to 30,000 words is all a big idea needs, when you don’t need to bulk it out with anecdotes to justify the price of a hardback book or to make sure it still has some value when it finally gets printed in a year. You can keep your thesis lean and topical. 

Finally, the big complaint of paper newspaper and magazine readers is that in print you often stumble across articles and ideas inadvertently, in a way you don't when reading an ebook. That too has now been corrected by the internet, which collapses a whole range of books and magazines into one spot. You can install software like Flipboard on your iPad and get instant access to the latest in almost any area of expertise, from the latest trends in science to those in art. What the internet has now done is to multiply your capacity to access and absorb information from around the world and in diverse areas, besides providing you links to the sources of this information as well. You can now almost drink from the hose.




Saturday, February 16, 2013

To be happy or to be creative that is the question

The tortured artist who produces a masterpiece of art or literature has been with us for as long as I can remember. Vincent Van Gogh cut off his ear while producing his masterpieces; others lived in poverty and gleaned lessons of life from their misery that found echoes in their writing. But is it true that you have to be truly miserable to produce great art or literature?


From Lord Byron to Vincent Van Gogh, society has long believed that creativity is the product of a tortured soul. 

Recent studies, however, have shown that in fact the opposite is true, and that everyday creativity is more closely linked with happiness than depression. In 2006, researchers at the University of Toronto found that sadness creates a kind of tunnel vision that closes people off from the world, but happiness makes people more open to information of all kinds. Not only are happy people more creative, but this creativity allows them to come up with new ways to solve problems or simply achieve their goals. This ability can lead to greater success or happiness, which spurs further creativity, feeding a self-perpetuating cycle in which these two qualities reinforce one another.
But most of these studies about happiness and creativity focus on everyday creativity, or an ability to think outside the box, and not necessarily on great artistic creativity. Although little evidence exists to link artistic creativity and happiness, it turns out that the myth of the depressed artist has some scientific basis. Researchers have found a slight connection between mental illness and high levels of artistic creativity. A happy person is better equipped to apply creativity to everyday problems, but a person with schizophrenia or bipolar disorder might instead be more capable of creating great art. Scientists are unsure of the exact cause, but some believe that manic periods give the artist an amplified version of the creativity experienced by happy, healthy people. Illnesses such as schizophrenia allow people to make connections or experience emotions that would not occur to people without these diseases.
Creativity is nothing peculiar to genius. Nor is suffering a precondition for it. All happy persons can be positively creative. It is not the hope of achieving fame or amassing wealth that drives the creatives; rather, it is the opportunity to do the things they enjoy most. According to a Yale University computer scientist, David Gelernter, all human beings "slide along a spectrum of thought processes" on an average day, and this could begin with "high-focus" thinking where "we can sandwich many memories and pieces of knowledge and quickly extract the thing they all have in common". It is not so much creative ability as assimilative expertise aiding swift decisions and quick action. Slide along the spectrum to "low focus" and we become less good at homing in on details, but our memories are "more vivid, concrete and detailed". The linking of memories and knowledge is more by emotion than by reason. When we are at the work place we are in "high-focus"; when we are in love, in "low-focus". It is when people are in "middle-focus" that they are at their most creative. This is because the mind is free from both obligatory, occupational concerns and mind-numbing, un-reasoning emotions. In "middle-focus", people make unusual connections—Newton and the apple, Archimedes and the bath tub, Kekulé and the two snakes, Gandhi and the railway booking in South Africa—and they acquire insights which change the course of science, art and history. Gelernter calls this mode "unconcentration", which provides a person the right insight into things that already are in high-focus.

Creativity, however, involves more than moments of "unconcentration", relaxation and free association of unconnected thoughts. Human imagination, indeed all imagination, follows rules and thrives on constraints to provide clear-cut definitions to problems for which one seeks creative solutions. Creativity itself is undefinable. It is not originality. One of the easiest things in life is to be original and foolish. For long, creativity remained a mystery better left to poets, artists and the like. It always conjured up the messy, unverifiable world of muses, inspiration and intuition.

French mathematician Poincaré identified four stages of creativity: preparation (you try to solve a problem by available, normal means), incubation (when these don't work and, in frustration, you move to other matters), illumination (the answer comes in a flash, when you are not looking for it), and verification (your reasoning powers re-assert themselves and you are on the way to finding a solution). Most of us give up at the stage of incubation and miss the illumination and, consequently, the experience of creative joy. Mark Twain put it nicely: Happiness, he said, "is like the Swedish sunset. It's always there. Only, people look the other way and miss it".

There's thus a correlation between creativity and happiness. All creative persons are not happy, but all happy persons can be positively creative. They all love what they do. It is not the hope of achieving fame or amassing wealth that drives them; rather, it is the opportunity to do the things they enjoy most. They feel an inner glow and they exude it. Many people do the work they do, and many do it better, but most of them either do not enjoy it or do it as a painful duty expected of them. Or, the spur is fame, power, money, publicity, awards and honors.

I looked at my own experience in the past few months. During my very painful bout with shingles, I continued to write my blogs and columns at the usual pace. But when my son turned up to visit me last week, I barely looked at the computer, choosing instead to spend time with him. So maybe there is some truth in the saying that pain leads to creativity while happiness leads to contentment with life and no great urge to seek noble meanings in the life around you. Of course, given the choice, how many would choose pain?








Friday, February 8, 2013

Alternative medicine- what is the truth?

Alternative medicine has always created controversy, whether it is homeopathy, ayurveda or acupuncture. Tomes have been written and much research carried out to demonstrate the efficacy, or lack thereof, of these treatments. Medical professionals trained in western methods routinely deride alternative medicine as placebos and mumbo-jumbo pills. But what is the real truth?

The fact is that anecdotal evidence goes heavily against the medical professionals. Take my own case.


I was always a sceptic of alternative medicine till my two-year-old son got "bleeding eczema", a particularly painful skin disease. We tried all the western remedies without any avail till one day our pediatrician advised us to go to a homeopath for a permanent cure. So we went to the preeminent practitioner of homeopathy in India at that time, Dr Jugal Kishore. He took a long hard look at the skin blisters and prescribed a range of pills to be taken at very specific times. How heavily diluted pills taken at very specific times could lead to a cure is beyond my ability to decipher, but the fact is that the skin was cured and the eczema has never come back.



So what is homeopathy? Homeopathy is a system of alternative medicine created in 1796 by Samuel Hahnemann, based on his doctrine of similia similibus curentur ("like cures like"), according to which a substance that causes the symptoms of a disease in healthy people will cure similar symptoms in sick people. Hahnemann believed that the underlying causes of disease were phenomena that he termed miasms, and that homeopathic remedies addressed these. The remedies are prepared by repeatedly diluting a chosen substance in alcohol or distilled water, followed by forceful striking on an elastic body, called succussion. Each dilution followed by succussion is said to increase the remedy's potency. Dilution sometimes continues well past the point where none of the original substance remains. Homeopaths select remedies by consulting reference books known as repertories, considering the totality of the patient's symptoms as well as the patient's personal traits, physical and psychological state, and life history. The low concentration of homeopathic remedies, which often lack even a single molecule of the diluted substance, has been the basis of questions about the effects of the remedies since the 19th century. Modern advocates of homeopathy have suggested that "water has a memory" – that during mixing and succussion, the substance leaves an enduring effect on the water, perhaps a "vibration", and this produces an effect on the patient. This notion has no scientific support. Pharmacological research has found instead that stronger effects of an active ingredient come from higher, not lower doses. Homeopathic remedies are derived from substances that come from plants, minerals, or animals, such as red onion, arnica (mountain herb), crushed whole bees, white arsenic, poison ivy, belladonna (deadly nightshade), and stinging nettle.
Homeopathic remedies are often formulated as sugar pellets to be placed under the tongue; they may also be in other forms, such as ointments, gels, drops, creams, and tablets. Treatments are “individualized” or tailored to each person—it is not uncommon for different people with the same condition to receive different treatments.
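The arithmetic behind the "not a single molecule" claim is easy to check for yourself. As a rough illustration (the numbers here are mine, not from any homeopathic source): a common "30C" potency means the substance has been diluted 1:100 thirty times in succession, a total factor of 10^60, while a mole of any substance contains only about 6 × 10^23 molecules.

```python
# Illustrative back-of-the-envelope calculation: expected number of
# molecules of the original substance surviving a 30C dilution.
AVOGADRO = 6.022e23          # molecules in one mole of any substance

# A "30C" remedy: diluted 1:100, thirty times in succession.
dilution_factor = 100 ** 30  # equals 10**60

# Even if we start with a full mole of the original substance...
starting_molecules = AVOGADRO
expected_molecules = starting_molecules / dilution_factor

print(f"Expected surviving molecules: {expected_molecules:.1e}")
# Prints roughly 6.0e-37: effectively zero chance that even one
# molecule of the original substance remains in the remedy.
```

In other words, the dilution factor exceeds Avogadro's number by some 36 orders of magnitude, which is why critics say the patient is swallowing only water, alcohol or sugar.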

But scientific research has found homeopathic remedies ineffective and their postulated mechanisms of action implausible. A large portion of the scientific community regards homeopathy as a sham; the American Medical Association considers homeopathy to be quackery, and homeopathic remedies have been criticized as unethical.


Despite this scepticism, according to recent surveys in France, an astounding 40% of the French public have used homeopathic medicines, and 39% of French physicians have prescribed them. At least six French medical schools offer courses leading to a degree in homeopathy, and homeopathy is taught in all pharmacy schools and in four veterinary schools. 42% of British physicians surveyed refer patients to homeopathic physicians. Another survey of British physicians discovered that 80% of recent graduates wanted training in either homeopathy, acupuncture, or hypnosis. One respected author estimated that 20% of German physicians use homeopathic medicines occasionally. At present, the most popular hay fever remedy in Germany is a homeopathic medicine, and other homeopathic medicines for the common cold, sore throats, and circulatory problems are in the top ten of their respective categories. According to the 2007 National Health Interview Survey in the US, an estimated 3.9 million adults and 910,000 children used homeopathy in the previous year. These estimates include use of over-the-counter products labeled as “homeopathic,” as well as visits with a homeopathic practitioner. Out-of-pocket costs for adults were $2.9 billion for homeopathic medicines and $170 million for visits to homeopathic practitioners. 

So what is one to believe?

Many years later I was stricken with shingles, a particularly painful disease. While the disease itself was cured in a few months, with really little help from western medicine, it left painful aftereffects due to nerve damage. We tried a range of western medicines to cure the nerve damage, with little effect; indeed, some of these medicines had side effects that were often worse than the pain. After much prodding by my wife and daughter, I reluctantly agreed to go to an acupuncturist. She was a Chinese doctor trained in Beijing and declared that acupuncture could certainly cure this nerve pain through a two- or three-month regime. So we started the cure and, sure enough, after three weeks there was perceptible improvement.


As we know, acupuncture is an alternative medicine methodology originating in ancient China that treats patients by manipulating thin, solid needles inserted into acupuncture points in the skin. According to traditional Chinese medicine, stimulating these points can correct imbalances in the flow of qi through channels known as meridians. The general theory of acupuncture is based on the premise that bodily functions are regulated by an energy called qi which flows through the body; disruptions of this flow are believed to be responsible for disease. Acupuncture describes a family of procedures aiming to correct imbalances in the flow of qi by stimulation of anatomical locations on or under the skin (usually called acupuncture points or acupoints), by a variety of techniques. The most common mechanism of stimulation of acupuncture points employs penetration of the skin by thin metal needles, which are manipulated manually or by electrical stimulation.
But current scientific research has not found any histological or physiological correlates for qi, meridians and acupuncture points. Other reviews have concluded that positive results reported for acupuncture are too small to be of clinical relevance and may be the result of inadequate experimental blinding, or can be explained by placebo effects and publication bias. The fact is that the invasiveness of acupuncture makes it difficult to design an experiment that adequately controls for placebo effects.
So, faced with this welter of facts and counterclaims, what is one to do? Here is where I come out after my limited experience with alternative medicine:

  • Despite massive research, the verdict on alternative medicine is at best mixed.
  • Always remember that your affliction is yours alone and unique, and only you can decide how to handle and conquer it.
  • If conventional medicine has not provided you with a cure, be willing to experiment with alternative medicine. But in doing so, always seek out the most prominent and reliable practitioner you can find and be aware of any negative side effects.





Tuesday, February 5, 2013

On mothers

A dear friend of mine recently lost his mother and wrote a powerful, emotional eulogy for her that I wrote about in my earlier blog. 

The loss of a mother is the most profound event in one's lifetime and leaves wounds that take a long time to heal. For many, even thinking about the departed becomes too painful and they go through life nursing the loss. Having gone through this experience twice in my lifetime, I can confirm that one never quite gets over it; the loss hits you when you least expect it, and with an emotional force that leaves you drained and dry.

I lost my mother when I was not even thirty and had not had enough time to spend with her in her last years, having been abroad or away from home. She was a kind and generous soul with a delightful smile and a kind word for everybody. It has been one of my eternal regrets that my children never knew their grandmother, and that she never got to see her daughter-in-law or her two grandchildren, and now her great-grandchild. She would have been so proud of them, but that was not to be. As years pass by, memories fade, but I will always remember her sweet disposition, with nary a harsh word for anyone and a willingness to listen to everybody's problems and lend a helping hand. She was graceful, and even when we kids teased her about her lack of English skills (she learned the language after she got married and was not familiar with our jargon of the day) she took it all with good humor. Now all I have left are but sweet memories of her:


“a traveller between life and death:
The reason firm, the temperate will,
Endurance, foresight, strength, and skill;
A perfect Woman, nobly plann'd
To warn, to comfort, and command;
And yet a Spirit still, and bright
With something of an angel light."

My mother and I in 1945

It was a few years later that her younger sister took over the mantle. For the next four decades she was to be our surrogate mother, guiding us through some of the most turbulent times of my life. I had married outside the family norms and was ostracized for almost a decade, and through it all she comforted and guided me through the emotional turmoil. She was there when my children were born and became an adoring and loving grandmother to them. To my children she was "Badi Aunty", and they rejoiced in her company even as she upbraided them and chided them into discipline. Actually, my own memory of her growing up was of a strict disciplinarian who kept all of us children in line in our grandfather's house in Lahore. She may have been a "phantom of delight" to her college mates, but to us children she was the "dragon lady", both feared and loved. Of course, all this changed when she got married. She then became our favorite aunt, generous with her time and love. We would spend summers with her in Iklehra, Parasia and Burhar, all coal towns where her husband worked, and these were summers we all eagerly looked forward to, filled as they were with affection and laughter (and don't forget the lovely cakes). Later, when I returned from the U.S. in the early seventies, she was the bedrock of our family and the refuge that both Ena and I repaired to. And even when we left Delhi, on our return she was always the first person we saw and the last person we had dinner with on our way to the airport. She was our confidante and adviser, our biggest defender and our warmest refuge. She died a few years ago, having lost her will to live after Erb's palsy took away her hearing and faltering eyesight. And we miss her.


I only wish that my children and grandchild will remember and cherish their memories of their grandmothers - one whom they did not meet and one they did. And that their wisdom, generosity and compassion will live on through them.

"Badi Aunty" in 2009


The right age

Women in their 20s are often told they're too young to settle down. Then, seemingly overnight, as they move into their 30s, they start hearing that they're spinsters. What gives?

Women today, in certain milieus, find themselves placed into one of two categories: too young to settle down, or too old to find a man. It seems in this view that there is a window of opportunity to get married, but it is ephemeral almost to the point of non-existence. It falls at a different age according to region, or the idiosyncratic biases of one's circle, but hovers around 27.

Here's how it works: A young woman hears from friends and family that she needs to focus on her career or education, not some guy. She is warned of certain dangers: unsolicited male attention; unintended pregnancy, as if intended pregnancy were also a thing; and the desire hardwired into all straight men to turn their girlfriends into 1950s housewives. To entertain the possibility of it being difficult to find a husband, to even utter the expression "find a husband," is to regress to another era. And this advice is incredibly appealing, a rejection of the quaint notion that female heterosexuality is the desire not for men, but for a white picket fence.

And then, suddenly, the message shifts. A not-quite-as-young woman will learn that rather than having all the time in the world to start a family, her biological clock is about to strike midnight. That even if she doesn't want children, she is now on the cusp of being too old to find a husband. Hasn't she heard of the man shortage, which only gets worse with age? 40-year-old men can date any 23-year-old they want; 40-year-old women cannot. And what about those degrees, that burgeoning career that she has so assiduously built? Should they all be forgotten just so that society may realize its dream of all women being married before they turn into spinsters?

As it stands, women in happy relationships are under pressure to exit them so as not to be 20-something child brides, while ever-so-slightly older ones are asked to settle, chastised for having given up Mr. Almost-Right back before they got haggard. And even if they have, by some miracle, remained attractive, it's all a mirage, because you can't fool nature.


Of course the window of opportunity emerges from certain facts: reproductive technologies have extended female fertility, but the age at which one may feel too young to settle down is increasing at least as rapidly, and with no end point. Men and women are in school for longer, and often financially insecure. What is socially constructed is the sense of urgency. The world does not end when a woman marries too early (within reason; note the use of "woman"), too late, or not at all.

The urgency comes from expectations younger women internalize. Reflecting on her college years, Kate Bolick (then 39 and single) wrote, "We took for granted that we'd spend our 20s finding ourselves, whatever that meant, and save marriage for after we'd finished graduate school and launched our careers, which of course would happen at the magical age of 30."

The problem, which Bolick grappled with, is that if one is to be single throughout one's 20s, yet married for all of one's 30s, this leaves rather little time for meeting a boyfriend, marrying him, and having children before 35, or "advanced maternal age."

Straight men, meanwhile, do not face these pressures. A man who marries young may be thought more responsible. No one will assume he gave up on his career for some girl. And a man who's 35 and is still single is not assumed, by virtue of his age, unmarriageable. One not interested in marrying is generally assumed to be living the life he chose, not to have failed to find a woman in his thicker-haired, pre-paunch days.

While women had long been warned of becoming 'spinsters,' what's new is that the message arrives with a thud. Women are now asked to live by second-wave feminist principles, until, boom, they're informed that they need a man no less than women ever did. The same friends and relatives who once gave advice informed by The Mary Tyler Moore Show have now let The Dick Van Dyke Show be their guide. 

The answer is not to radically revamp expectations. As a rule, it makes sense to encourage the young to see what's out there before committing. The stability of marriage in upper-middle-class circles likely owes something to premarital trips around the block. French politician Léon Blum's 1907 argument in favor of both sexes experimenting prior to settling down, as a way of improving marriages once they occur, has been largely vindicated. Meanwhile, there's no point denying that if one wants to start a family, there's an age past which this becomes more difficult, especially for women. 

But individual cases are, well, individual. A 22-year-old may already have had all the dates or relationships she wanted, and be prepared to commit. While the woman of a certain age who regrets dumping a long-ago boyfriend has become something of a cliché, there probably are women who regret ending things simply because those whose advice they value urged them to move on. And while romantic options tend to decrease with age, there is no official end date to when a woman can find a husband.

It is time for all of us to become more accepting both of women settling down younger than the "right" age, and of women remaining unattached past that point. In the meantime, women should do as they please and care less what those around them think.

For the fact is that there is no right time to marry. The only right time is when the mate is right!