12 December 2013

We need to do a better job of forcing the technology to come to the person instead of forcing the person to come to the technology


If one were to take my education and professional background and turn it into a survey that could be aggregated to derive trends, my profile would not be included in the STEM category. My bachelor's degree was in marketing, my master's degree was in international affairs, and my work is in international development. Aside from one year in which I was explicitly called an IT project manager, broad categories would classify most, if not all, of my experience as business or liberal arts. Fair enough, right?

Not really. For those of you who know me, you know that my specific brand of international development is mostly focused on the role of information and communications technology (ICT). Ironically enough, it was the student international affairs organization I ran in undergrad that first alerted me to the power of ICT in economically emerging contexts, as it was through this organization that I helped build my first website. From there, my professional work has included all kinds of research, UI design, web content, and information architecture – all to do with the T in STEM – technology.

Yet despite my total embrace of using and wanting to learn more about technology in my career, I am not considered a part of STEM fields. Not only does this make it insanely difficult to talk to both engineering companies and many of my international development colleagues about what exactly I do, it's also discouraging and often isolating to not be given much of a platform to embed myself further in the intersection of the two fields. It's misleading, too. If I were to do the exact same work at an engineering company, my title might change, and my inclusion in the STEM category might change with it, even though my job would not.

I am an ardent supporter of all of the initiatives of the White House, major corporations, academic institutions, and the like to actively encourage and recruit females to take on the hardest, most challenging, most complicated STEM majors, jobs and career paths possible. But while I think these campaigns and initiatives are incredibly important to the future of female-kind, and really world-kind, I think there is a huge opportunity being missed. And that opportunity is mainstreaming technology; it's forcing the technology to come to the person instead of forcing the person to go to the technology.

The UN is a perfect example. There are thousands of brilliant, analytical, logical thinkers working at the various agencies and the Secretariat of the UN. But because staff members were never forced to engage with technology early on, and were subsequently able to get away with never learning much about their computers, anything ICT is, with very few exceptions, considered tangential or limited in its role relative to core operations.

Of course, technology is merely a facilitator of solutions, and often not the solution itself. And this is precisely why it is such a waste not to promote technology as a mainstreamed component of non-STEM classified professions. Even if the US Government and every major tech company were to allocate a significant portion of their budgets to promoting STEM majors in high schools and universities around the world, it may take decades to significantly shift the current figure of roughly 24% of STEM roles being held by women. Even if it didn't, what about the generations of women – mine included – who are currently in the workforce in non-STEM classified roles? Should they be ignored if they can't afford to go back to school and get a relevant degree?

Consider some of the most popular (by number of women employed) industries for women in the States and Europe:

- Education
- Human Resources
- Social Work
- Fashion/retail

Based on my experience, I can safely guess the majority (or at least close to half) of the people employed at the UN in professional capacities are also women.

So imagine that, in addition to encouraging women to go into unequivocal, indisputable STEM professions, we also mainstreamed the use of technology in the above professions. I'm not speaking in small strokes, like having a teacher give homework out to students via a tablet; I mean actually requiring a technology component in these professions.

Imagine if your primary school teacher set up the information architecture to have each student digitally paired with a foreign pen pal? Or if the HR rep worked directly with software engineers to set up how job-sourcing sites interact with their department? Or if a social worker designed an app to pinpoint areas most prone to a certain type of violence, so programmatic efforts could bring workshops where they are most needed? Or if a wedding gown designer worked with a 3-D printer to prototype new designs on different body types?

Sure, there are undoubtedly women who are doing each of the above, but how many of them are doing ICT work for their jobs electively or as a requirement? Imagine how many more people would reach a much higher level of ICT competence if we figured out a way to make this a core requirement of all work? The possibilities are endless, yet we seem to disproportionately focus on driving people to STEM professions instead of promoting STEM applications in all professions. As a result, the above functions now almost entirely go to engineers, programmers, or IT people instead of being done by the experts in the actual profession.

Of course, one could make the argument that cross-disciplinary education or professional experience is always useful, so this idea is already being put into practice. And indeed, some links do exist between traditionally liberal arts and STEM degrees and professions, such as international development and science (public health), art and math (graphic design, architecture), and sociology and engineering (urban planning).

One could also argue that not every job can include every kind of function. True, though 20 years ago, many people would have argued there was little to no application for the Internet in their profession. 30 years ago, the same was said about computers; 100 years ago, the same was said about human resources departments. The point is that as the world evolves, so do the requirements to make it function. Not giving everyone the opportunity to exercise what is increasingly being seen as a life skill - the use of STEM in the workplace - sets a dangerous precedent that will surely affect generations like mine that did not have STEM requirements past high school.

I like to tell myself that had I known there were degrees focused on the intersection of technology and international development (called ICT4D), I would have gone for that instead of an M.A. in International Affairs, but I didn't, and I am not sure that when I started my Masters I even knew that's what I wanted. It wasn't until I started researching for other classes that I realized what was out there, but by then it was too late; I was already an expensive semester into my courses, and there were neither tracks nor classes in my department geared towards this ICT4D intersection.

Ultimately, what I am saying is that if we want to encourage any group of people, be it women, minorities, or the poor, to go into STEM fields, we need to do a better job of making STEM work applicable and apparent to those who are choosing or are already in another field. I myself chose international development out of an overwhelming desire to make the planet a better place; had I known how critical technology could be to the equation, perhaps I would have gone into a STEM field and worked my way into the international development world. Better yet, maybe the Mala of the next generation will choose exactly the same degrees and workplaces I did, but have the option to better engage with technology. Who knows, maybe next-generation Mala will even be included in the elusive STEM category.

13 November 2013

Big Data Needs to Make the World a Better Place


Big data has been used for the better part of the past decade, with some of the first uses coming from epidemiologists and biological scientists tracking the spread of disease. Yet even though Google Analytics launched in 2005/2006, my undergrad degree in marketing at a major university in the US, completed around that same time, involved little to no mention of the power of aggregated, automated data. Today, big data and analytics in the Global North[1] are used as a core part of the marketing, advertising, and entertainment industries, to name a few.

When applied to contexts in economically developing countries, even the perfect set of data is not enough to predict what will happen, as infrastructure is lacking, government systems have limited capacity, and a large part of the population is impoverished. Still, the power of big data is no less dramatic than, say, getting some kind of record of the overwhelming majority of the world's population for the first time in world history. If someone can figure out how to aggregate this data (respecting privacy, no doubt) in a meaningful way, we can figure out what the situation is – be it the impact of a natural disaster, rates of hunger, spikes in unemployment, increases in violence – while it is happening. And that means we can respond more effectively and efficiently.
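To sketch what "figuring out what the situation is while it is happening" could look like in practice, here is a minimal, hypothetical example in Python. The regions, counts, and the two-times-baseline threshold are all made up for illustration; a real system would have to deal with privacy, data quality, and far larger volumes of data.

```python
# A minimal sketch (hypothetical data and threshold) of aggregated monitoring:
# roll up daily incident reports by region and flag any region whose latest
# count spikes well above its recent average.
from statistics import mean

def flag_spikes(daily_counts, threshold=2.0):
    """daily_counts: {region: [count_day1, count_day2, ...]}.
    Flag a region if its most recent count exceeds `threshold` times
    the average of the preceding days."""
    flagged = {}
    for region, counts in daily_counts.items():
        if len(counts) < 2:
            continue  # not enough history to compare against
        baseline = mean(counts[:-1])
        latest = counts[-1]
        if baseline > 0 and latest > threshold * baseline:
            flagged[region] = (baseline, latest)
    return flagged

# Example with made-up numbers: Region B shows a sudden jump on the last day.
reports = {"Region A": [4, 5, 3, 4], "Region B": [2, 3, 2, 9]}
print(flag_spikes(reports))  # only Region B is flagged
```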

Whereas a large push to use big data in sub-Saharan Africa, South Asia and parts of Latin America is to better human development, big data in the Global North has largely been used for for-profit endeavors. There have, however, been community-betterment uses of big data in the Global North. Bloomberg used big data to reduce crime in New York City, the US Centers for Disease Control use big data to help track the flu, and crowdfunding sites online use big data to help people get money for projects that they could otherwise not fund.

Despite these examples of big data being used for social good in the Global North, as a proponent of using big data as part of international humanitarian and development solutions, one thing continually bothers me – though big data has revolutionized certain industries, I am not convinced it has made the countries that use it the most any better. The past 10 years have seen the rise of big data, but they have also been difficult years for the Global North, not least for the United States.

Here are some statistics about the US between the years 2000 and 2010:[2]

- According to the American Foundation for Suicide Prevention, suicide death rates rose.
- The U.S. Census reports poverty rates rose.
- Inflation data shows the cost of higher education skyrocketed relative to overall inflation.

Of course all of the above are due to a huge set of variables, including national policies, consumer behavior, and the world economy. Even if we were able to model every driving factor and outcome of the trends listed above and present that data to everyone who has any influence, there is no guarantee a person will make the best decision for either themselves or for the country. Likewise, when politicians, constituents and bystanders make decisions, there is no guarantee good data will be followed.

But why is it that with all of the data available, dialogues going on in present-day America reflect how bullying, a lack of respect for women, mass shootings, and unfair immigration practices are destroying our country? With all of the data available, why are we still talking about the same topics as 10, 20, 30, even 40 years ago?

I personally think the reason goes back to the predominant goal industries have in using big data - to make a profit in the short term. There is a huge imbalance in what big data can currently offer to the average person in countries that are at the forefront of its usage curve. We have figured out how to generate an automatic song recommendation for someone with a few clicks of a button, but we have not figured out how to use big data to keep someone out of poverty. You can use big data to figure out what pair of shoes to buy based on your preferences, but not to build a customized plan based on those same characteristics – occupation, location, age and interests – to overcome suicidal thoughts.

Big data will not be able to tell us whether the world is a better place. That is determined by other measures - happiness, access to services, security, safety, expression, among others. What big data can do is improve the performance of the measures we use to determine if the world is a better place. It’s time to make big data less about consumer products. Simply put, big data in the Global North needs to become more relevant to issues that truly matter.


[1] When I use this term, I generally mean economically developed countries, including the US, Canada, Europe, Australia, Singapore, Japan and New Zealand.
[2] To be fair, not all is lost. Notably, murder rates are on the general decline.

25 October 2013

A Fundamental Problem in ICT4D Projects

In July, I wrote a post about how international development needs to disrupt itself. In talking to development practitioners since, I realized the issue of timing in ICT4D projects is one that many have encountered, so I thought I'd elaborate on what exactly I mean.

Of course there are many elements that make up an ICT4D project, but for simplicity's sake, let's say we can roughly split the project into two elements - the ICT platform itself and the international development context. The former is easy to understand; the latter includes everything from the problem statement, to the economic implications, to the need, to the funding, to the monitoring and evaluation metrics and systems.


For example, suppose you want to design a project that allows female small merchants in Senegal to deposit their earnings via a free SMS in order to reduce the rate of robberies in the area. Your ICT platform would obviously need to work with local Senegalese mobile phones and carriers, connect to relevant financial institutions, and be able to be monitored. Development of the ICT platform will certainly take time, though given the number of open-source platforms available, the amount of technical documentation, and the prevalence of SMS campaigns and usage, one would imagine the ICT platform development is fairly straightforward in competent hands.
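As a rough illustration of how straightforward the core platform logic can be, here is a minimal sketch in Python. The message format ("DEPOSIT 5000 XOF"), field names, and example phone number are all hypothetical; carrier integration, security, and the actual connection to a financial institution are deliberately left out.

```python
# A minimal sketch (hypothetical message format and field names) of the kind of
# SMS-deposit logic described above: parse an inbound text such as
# "DEPOSIT 5000 XOF", validate it, and record it against the sender's number.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Deposit:
    msisdn: str          # sender's phone number, as reported by the carrier
    amount: int          # amount in the smallest currency unit
    currency: str
    received_at: datetime

def parse_deposit_sms(msisdn: str, text: str) -> Deposit:
    """Parse an SMS body like 'DEPOSIT 5000 XOF' into a Deposit record."""
    parts = text.strip().upper().split()
    if len(parts) != 3 or parts[0] != "DEPOSIT":
        raise ValueError(f"Unrecognized message: {text!r}")
    amount = int(parts[1])
    if amount <= 0:
        raise ValueError("Deposit amount must be positive")
    return Deposit(msisdn=msisdn, amount=amount, currency=parts[2],
                   received_at=datetime.now(timezone.utc))

# In a real deployment this record would be forwarded to the partner financial
# institution and logged for monitoring; here we just print it.
if __name__ == "__main__":
    print(parse_deposit_sms("+221770000000", "deposit 5000 XOF"))
```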

On the other hand, building the international development context is less straightforward. How many women have access to phones in this particular region of the country? Are there negative cultural implications to these women having phones or handling money, including electronically? Would the possible stakeholders/clients/beneficiaries use such a tool? Even if all of these questions have been answered before somewhere else, do those answers apply here? Further, who is going to fund the cost of developing and implementing the project? How is success going to be measured, both in terms of usage and in terms of what counts as meaningful usage?


[Figure: Timeline of ICT platform development versus international development context development]

Unsurprisingly, it has been both my experience and the experience of most (if not all) ICT4D practitioners I know that the project development timeline looks like the above. A working beta or even version 1.0 of the ICT platform is created and ready to be deployed well ahead of building the international development context, especially in determining steady funding and making sure beneficiaries/stakeholders/clients know about and are prepared to use the platform.


What this essentially means is that for two reasons, the ICT development lifecycle doesn't flow as it would otherwise:



[Figure: ICT project development lifecycle]

First, after creating the beta/version 1.0, without having beneficiaries/clients/stakeholders ready to test the tool as it would be used in the actual project, it is often unclear to whom, where and how to deploy. Imagine building a piece of software that does taxes, only you're not entirely sure you'll have access to people who actually pay taxes, so you don't know whether to turn the software into something else and expand the scope of people who might want to use it.


Second, critical and relevant user feedback is therefore also often not possible. Instead of giving your software to people who pay taxes to check whether the software calculated their tax returns properly, all you can do is give the software to people who don't pay taxes and see how much they understand, even though they're an irrelevant group of people. You'll get feedback on the interface, aesthetics, and functionality of the software, but you still won't know if the software actually works before you have to go back and make corrections for version 2.0.

  
In an ideal world, the iterative process in ICT platform development would happen indefinitely as technology evolves. In reality, resources in international development are scarce relative to the size of the problems, and highly skilled technologists can only be secured for short periods of time.

Going back to my other post, what this means is that the international development side of ICT4D projects needs to disrupt itself. Personally, I think there are ways to use existing ICT platforms to build the international development context more quickly, more efficiently and more accurately. I have a few ideas of how to break these context parts down, and the accompanying solutions are something I hope to explore in the future.

07 October 2013

The Monetization of Risk in Our Lives

I have been lucky enough to do a lot in my life so far. If I were to die tomorrow, my range of experiences could compete with the lifetimes of many people in the world. Certainly two of the main reasons for this are my personality and my interests. However, it would be in extremely poor taste if I neglected to mention one of the other main reasons - my monetary risk in doing what I love is very low. And in America, monetary risk is everything, even if it shouldn't be.

For all of the great things about the United States, certainly one of the country's national shames is the number of people living in poverty. According to recent census data, 46.5 million people, or about 1 out of 7 of the total US population, live below the national poverty line. To get a better sense of what this means, visit the Occupy movement website or the many progressive content curation sites online, which have done a thorough job of creating infographics explaining the income inequality in this country that underlies how a nation so rich[1] can have so many people who are so poor.

When compared to Western Europe, the UK, and Australia, the lack of social safety nets afforded to American citizens and residents is both obvious and astonishing. Sure, one could argue the quality of services and systems in the States is superior, but the fact remains that in order to get health care[2], a higher education, or access to reliable public transportation, most people living in America do not have a free or even cheap option. If you are living in America and do not have a certain level of income (be it your own or through someone who is supporting you), you could easily starve, drop out of school, or die of a treatable condition.

Opinions aside on what the solutions are to this rather abysmal problem, this lack of social safety nets presents a phenomenon that seems to be increasing in intensity as the economy continues to stagnate and the GOP shuts down the government – the monetization of life risk. At the UN World Food Programme, I worked on a project that quantified drought risk in terms of dollars. That is to say, if a drought were to occur, what is the cost of dealing with its effects? For the insurance mechanism I was working on, this method of approaching risk makes sense. In life, monetization is only one way to approach risk, yet it dominates the way we think of success (and therefore success in mitigating risk).

I have had every incentive to work hard to build my own career, to live in a place I enjoy, and to buy the things and fund the experiences that I want. However, being an incredibly fortunate individual who was born to parents who are both willing and able to provide me with a roof, food and clothes whenever I'm in need, I have been able to approach the world with the idea that if all else fails, I will still survive with a decent quality of life. If I completely fail to financially support myself, I will not starve or be forced out on the street. In having this kind of freedom, I have been able to measure the risks of my life in ways other than monetarily – the risk of being unhappy, the risk of not pursuing a career I want, the risk of not helping others in my work, the risk of not being surrounded by a community I find desirable.

Of course approaching life this way has had its ups and downs, though even in the lows, I have mitigated the risk of unhappiness in that I have broken ground on something that is truly important to me. Of course many people who do not have the safety nets my family has provided have also approached life this way. And certainly what we desire in life shifts as what we have available to us changes; someone who was not born into a life that allowed for basic amenities may simply want to provide that security to their children, and may simply be happy in providing it.

However, it also stands to reason that in a country that does not provide enough to survive (let alone enough for a decent quality of life) to everyone, a very significant portion – likely even the majority – of the American population has had to make decisions based primarily, if not entirely, on monetary risk. Put in a very cliché way, much of the American population has approached life knowing they cannot follow their dreams if their dreams do not pay enough. It doesn't matter if someone in this segment of the population is an especially talented computer engineer, grief counselor, artist, or teacher. If they feel as though the monetary returns on their investment in pursuing what they want cannot provide enough to survive, there is no adequate safety net on which to fall back.

Some would argue that the incentive to simply survive means that people work harder than they would otherwise. Indeed, The Economist reported in their latest issue that the Pew Research Center found only 59% of Americans feel as though the government has a responsibility to take care of people who cannot take care of themselves, down from 71% during the Reagan years (1980s). As an international development practitioner, to that I simply say, look at the world around you. In countries of South Asia, West Africa and Central America, the basic need to survive is not enough to take care of everyone when there is not enough to go around or when the systems in place systematically keep people at the bottom. Creating more from something is much easier than creating something from nothing.

Optimistically, this country has not wavered on the idea that anybody can become successful, regardless of race, creed, religion, or background. Unfortunately, with the increase in people who do not want social safety nets for all and the ever-declining social services provided in the United States, putting non-monetized life risk first will always carry the danger of not surviving. Without any breathing room, someone who could have been the next Albert Einstein may never feel empowered to move beyond a life as a cashier. And why would they? The risk is too high.


[1] The 2012 GDP of the United States was USD 15.68 trillion.
[2] The states that have embraced the Affordable Care Act will be able to provide much cheaper options to more people starting in January 2014. Still, these options are much more expensive than those of our Western counterparts.

17 August 2013

The Danger of Normalizing Obesity


As a kid, I spent a lot of time with my grandparents. Like many first children of a household, I was born in a period of my parents' life during which they were still establishing careers, a house, etc. So contrary to my brother's upbringing, much of my earlier years were heavily influenced by my mother's parents. Besides having a natural sweet tooth, my grandmother grew up in abject poverty. When she immigrated to the States, she took great pleasure in discovering sugar in all of its American forms. Since sugar was a luxury commodity in India that she had often been denied, her natural instinct was to shower me with it to my heart's content. Having a strong sweet tooth myself, yet little moderation, I was obese by the age of eight.

The peak of my heavy period was my senior year of high school, when I expanded to tightly fill a size 13 pair of jeans.[1] Certainly a 13 isn't the most extreme of American sizes, but considering my age, and that I came from a highly educated family with constant access to healthy food, advanced medicine, a life that allowed for recreation time, and a solid understanding of the long-term health implications of obesity, my size was hard to justify short of a psychological explanation, and I have to admit I could not blame it on that. It wasn't until I saw myself on TV towards the end of that school year that I realized how large I had become.

I decided to make a lifestyle change, and have been an avid gym rat ever since. I take bad food in moderation and eat plenty of the good, all in balanced portion sizes. In the ten years since I made that change, I have lived everywhere from the cusp of the Sahara desert to some of the most expensive parts of Europe and the States. I have been through major surgery, dealt with a few traumatic events, and been through many highs and lows. Understandably, there were some bad weeks in these ten years. Still, through it all, I have remained very consistent in always finding a way to exercise, to get proper nutrients, to take care of myself. After losing that weight, I became a happier, better, stronger person. Most of the health problems I had went away, I stopped having mood swings, and even my skin and hair look better.

When I was in grad school, I took a small road trip with two of my classmates and the sister of one. During that trip, the sister explained that she was working on a thesis examining a media movement in South Africa among people infected with HIV/AIDS. While the movement did a lot of good in bringing a much-needed voice to those who were infected, ensuring they were being treated fairly and that HIV/AIDS as a disease was understood, the movement went so far as to normalize the disease itself. It was diminishing the gravity of what it meant to be infected. It built such an exaggerated sense of community among HIV/AIDS patients that it had almost become a badge of honor to be infected. Public health officials were concerned that the simple fact that AIDS is a life-threatening disease was being lost in the messaging of the campaign.

I by no means wish to suggest that discriminating against someone who is obese is anything but illogical. We all know physically lazy people who are mentally very driven, and physically driven people who are mentally very lazy. Being thin doesn't automatically mean someone is healthy or in shape, and being overweight doesn't mean that someone doesn't care about their body. And in fact, it is not uncommon to see blogs, online forums, commercials and other media outlets promote the very pragmatic ideas of being comfortable in one's own skin, not being obsessed with size, and having a healthy self-image.

My concern, however, though not as extreme, is related to the example above: obesity is becoming normalized. Being obese is now often explained as "just another trait" some people have, like one's race or sexual orientation. The problem is that in reality, the message of self-confidence can be misconstrued to mean that being concerned with one's health and weight is a categorically superficial goal. In that skewed interpretation of being comfortable with oneself, self-accountability for taking care of oneself can be demonized. Valid concerns for another person's health can be branded as shallow. Telling a friend they should quit smoking is socially acceptable; telling a friend they need to lose weight is not. Addictions to specific substances are treated as a bad thing; the idea that overeating is an addiction is not even considered by many people.

Of course, losing weight is not the cure to all health problems, of course there are many physical and mental ailments that might prevent a person from being healthy, of course there are cultural implications for a lot of people in making healthy choices, of course the cost of being healthy in America is prohibitively high for many, of course modern medicine has increased life expectancy drastically even as the world becomes increasingly sedentary. But regardless of the reasons behind the problem, being obese does have many negative effects on the body and mind. As is the case with any disease, not treating both the cause and the symptoms can have dire consequences. The literature on what obesity is costing the United States in health care alone is endless. The quality of life obesity strips away is a finer, but equally important, point. My own life is a perfect example.

When I made that lifestyle change, I finally embraced the idea that wanting to be fit is not embedded in superficiality. Laughing at consistently unhealthy eating habits is a detrimental defense mechanism. Writing off a total lack of physical activity as “understandable laziness” is not an excuse. Increasing my clothes size every year is not a suitable alternative. My decision to lose weight was not based on media influence of impossible-to-achieve standards of attractiveness. It was not predicated on some idealistic notion of finding the perfect partner only after attaining the perfect body. It was a decision to become accountable for my own choices, and to allow my health to take precedence in my life.

I can only hope that individuals who are going through the battles of obesity make the same moves to learn how to prioritize eating well, exercising, and staying away from harmful substances (or, equally - harmful amounts of substances). Attributing unhealthy behavior to a beauty counter-culture is not the solution. Finding excuses instead of answers is not the solution. Normalizing obesity, justifying obesity, is not the solution to the epidemic. 


[1] Back then I shopped in the juniors department. The adult American equivalent size is 14; the adult European equivalent size is a 44. For those who have seen me in the last few years, I currently wear a size 8 (38 in Europe), though my body composition is now of much more muscle than back when I was in high school.

08 August 2013

Now is a better time than ever to be gay in America, but will it stay this way?


 "I support same-sex marriage."

In present-day, 2013 America, saying this means you are in agreement with about half of the country. When I said it during my second year of university, I was at the extreme.

Back in September 2004, I took a required public speaking class in college for which the final assignment was to make a 5-minute oral argument on a well-known issue. I picked same-sex marriage. My arguments were fairly straightforward talking points: 

  1. The United States is a secular nation
  2. The American Psychological Association has definitively concluded homosexuality is not a mental disorder, nor is it related to dangerous or "perverted" behavior
  3. Same-sex marriage extends thousands of state and federal rights to law-abiding, tax-paying citizens in committed relationships with their partners
  4. It protects children who have LGBTQ parents or guardians
  5. Its economic benefits are widely recognized

In other words, my arguments were more or less what you hear nowadays. The difference between 2004 and today is the reaction. Any person who was old enough and paying attention to the rhetoric during the 2004 US Presidential election likely recalls the vehemently negative portrayal the Republican party used against LGBTQ communities. Foreign policy, the economy, and war took a backseat to the backlash waged on sexual orientation minorities. And the worst part of the rhetoric? IT WORKED. George W. Bush won the election.

America is a country whose laws are based on the idea of historical precedent - that is, an argument for or against a law can be built on what was decided in the past. Of the many arguments in favor of same-sex marriage, a very compelling one follows suit with Brown v. Board of Education, whose ruling overturned Plessy v. Ferguson and dismantled the "separate but equal" laws allowing public places to be segregated by race. The mere fact that there were two different systems for two different sets of people made the systems inherently unequal.

Since gay and lesbian people cannot enter into the same kind of physical, emotional and mental commitment with the opposite sex as straight people, since same-sex marriage applies to two consenting adults, and since denying someone's rights based on archaic notions of what is right is simply wrong, the majority of the country now agrees that same-sex marriage should be allowed. The majority of the country now agrees with me.

Back then, giving a speech like I did in favor of same-sex marriage automatically relegated me to the "very liberal" or "socialist" category of the political spectrum. Those of us who were in favor of full same-sex marriage were often dismissed as extremists and written off. Even most of my so-called socially liberal friends would only go so far as to back civil unions, most moderates I knew didn't dare make a stand for any kind of same-sex rights, and most conservatives I knew were categorically against any LGBTQ rights, same-sex marriage and civil unions included.

Yet today, less than ten years since I made that speech, all of the points I made that were then tossed out as liberal banter are now seen as mainstream logic. Whether it's the media's total embrace of an open and proud LGBTQ community, whether it's politicians finally recognizing the financial power of the LGBTQ community, or whether it's the power of the Internet making the LGBTQ community more ubiquitous and visible, the tides have certainly changed in favor of same-sex marriage.

But will it stay this way?

In 2003, Supreme Court Justice Scalia wrote in his dissent to the majority opinion in Lawrence v. Texas that striking down an anti-sodomy law would pave the way to same-sex marriage. Thus far, Scalia seems to be correct. The swing towards same-sex marriage equality is gaining momentum. The question that a lot of us then-extremists, now-normalists have is whether the upward swing towards fully embracing LGBTQ rights in America is here to stay or is simply a social trend that will oscillate back. Some may call this line of thinking paranoid. Given current events, however, I call it cautious optimism.

There's no doubt that since the Supreme Court decision in Roe v. Wade was handed down in 1973, the constitutional protection for women to abort the fetus(es) they are carrying up to the third trimester has fueled much controversy. Indeed, some now argue that part of the reason the pro-choice movement has taken a beating in recent years is that the Supreme Court decision rode on the back of a temporary social movement. With the resurgence of a very socially conservative and very powerful Republican party, the movement to make abortion rights a moderate issue has faded. As a result, some states in the south have effectively closed all abortion clinics, and made abortion services nearly impossible to access. And they've done this with widespread voter approval.

In high school, I became a fan of a Russian pop duo known as t.A.T.u. Their schtick was that the two members of the group, who were young females, were gay and possibly in a relationship. Though the founder of the group came under very valid scrutiny for promoting a pedophilic image of the two girls (who were 16 when they started t.A.T.u.), the point is that this group was allowed to gain huge public momentum from within their own country, and eventually became one of the best-selling Russian artists ever. English- and Russian-language t.A.T.u. albums have sold more than 15 million copies worldwide.

Twelve years later, Putin has taken the country back to the dark ages of LGBTQ rights, publicly humiliating gay men, beating peaceful Pride protestors, and issuing statements that gay athletes competing in the 2014 Sochi Olympics may be detained and jailed. Surely a musical act such as t.A.T.u. that openly displayed acts of lesbianism would not have survived or thrived in this environment.
 
Of course LGBTQ rights in America differ greatly from abortion rights. A gay person can be of any ethnicity, race, religion, gender or socioeconomic class, whereas the majority of people who receive abortion services tend to be of poor socioeconomic status, and are of course all women. With the power and visibility of online platforms, with the slow nature of establishing a long-term relationship, and with the sheer financial power many LGBTQ people have in major cities, one would think that positive change for LGBTQ people, as a greater community, will have greater staying power.

Likewise, America is not Russia. Many politicians in this country certainly have incredible power at their disposal, but as flawed as our democratic system can be, it is one of the most stable in the world. Waving a wand and summarily outlawing homosexuality (or any other attribute) across the country has very different consequences for our elected officials than it does for Putin, and will therefore not happen in such a sweeping, unquestioned and uninhibited way.

Still, the backlash against pro-choice advocates and against LGBTQ people in Russia does make me stop and question. 2004, after all, was not that long ago. The sweeping support the LGBTQ community has received in recent years is hard to explain by even the most informed political and media experts. Since we do not know how or why it happened so suddenly, and since present-day events are now telling us that social progress can be a temporary phenomenon, us then-extremists, now-normalists are left to wonder whether this relatively golden era of being gay will indeed stay this way.

29 July 2013

International Development Needs to Disrupt Itself


A few months ago, I heard Joi Ito, the current director of MIT's Media Lab, speak. One of the points I found most telling in his talk was the idea of how the Internet has disrupted the traditional order of operations in creating new technological innovation, especially at the global level. Due to heavy research and development costs, overhead, and patent protection, technological innovation pre-Internet often came in a very top-down fashion at the behest of large government organizations or private corporations.

The Internet changed all of that; the Internet allowed individuals or small groups of people with little to no assets to create something and find investors to bring it to scale. One of Ito’s examples was YouTube, which was hatched by three software developers in 2005. Not knowing what the precise use of YouTube would be, the developers concentrated on creating the product, and eventually attached it to an online dating service that allowed customers to upload video clips. With time, YouTube migrated to MySpace before becoming its own platform that was eventually bought by Google.

This sort of disruption is seen on a daily basis in New York. In fact, Mayor Bloomberg even started a large campaign called Made in NY, which gives media backing to more than 3,000 tech start-ups in the city. From social media, to analytics, to online shopping, to crowdfunding, interactive mapping, and more, the transformative potential of the next wave of Internet behemoths is well known and well documented.

Indeed, this wave of innovation and entrepreneurship extends far beyond the reaches of major tech cities; in fact, it goes beyond small cities in the States – including my hometown of Richmond, VA. GSMA’s Mobile for Development Intelligence recently published a report on scaling mobile technology in economically developing contexts. In the report, they write about seven Internet innovation centers across Africa, including iHub in Nairobi.

If Internet technological innovation at this level has made it this far, if Internet technology is so pervasive in our lives, why then has it not been embraced at a strategic level in much of the international development community? ICT brings efficient, low-cost solutions, two characteristics continually sought out by an increasingly financially strained donor community. Among many other reasons, I believe one of the driving factors is that the disruption that has occurred in the ICT community has not happened in the international development world.

One of the favorite international development expressions is,

            “In the context of.”

We as development practitioners are taught to build a “context” – a case – for everything that we do, and often for good reason. Solving one small problem in an environment with little infrastructure, high gender inequality, the majority of a population impoverished, and with highly imperfect systems means that nothing can be looked at in isolation.

A lack of due diligence before deploying a solution could very easily negatively affect those we are trying to help. Distributing computers to slum-dwelling youth to improve access to education resources might put them at high risk of violence from non-recipients trying to steal the computers. Connecting adult females to job boards targeting women might ostracize these new workforce participants from their community. Automatically distributing internationally sourced food at the end of a bad crop season might crowd out farmers who cannot afford to sell their stocks at a lower price.

It takes time to understand a context, to understand how a solution to one problem can and should work in conjunction with solutions in the same community. It takes time to conduct literature reviews, secure ongoing donor funding, find political support, and build an argument as to how, why and when a solution should be implemented.

What this then means is that the pace of ICT solutions on the technical side often outpaces the international development side. As we all know, a perfectly executed piece of software, hardware, operating system, whatever, is useless if unused. Without constant market feedback loops telling us whether a solution has worked, the iterative process of innovation loses steam quickly.

I have no doubt that ICT-based international development solutions will continue to grow in prominence and relevance in coming years. However, in order to add more value, the international development community – practitioners, donors, governments – will have to change its approach. We have to figure out how to disrupt ourselves.

21 July 2013

Discrimination of the Unknown and Assumed


A few years ago, I read an interesting article about how getting accurate readings on which way a district would vote in terms of American political parties requires an increasingly micro approach. In other words, overall voting results are ideally predicted at the neighborhood level. Indeed, my experience as an Indian-American, Hindu woman growing up in Central Virginia is also very specific to the exact time and place, and may in fact bear little to no resemblance to how two of my cousins – also Indian-American Hindus – will feel about their childhood in a neighborhood just 40 minutes away, 15-17 years later, once they are my age.

That said, I grew up in a place and time where Indian and Hindu cultures were unknown and therefore often hated. I could count the number of non-black, non-white, non-Christian people in my middle school[1] on one hand. Being a minority of a few unknown types prompted questions from both white and black kids, parents, teachers, and strangers:

“What are you?”

“What the hell is ‘Hindu’?”

“Indian? Like Pocahontas?”

Most of the time the questions were in a harmless tone, most of the time the questions were out of pure ignorance, but most of the time the questions were completely rhetorical. Most people weren't asking to learn about something new, they were asking to highlight the differences between us. Even so, as I got older and learned how to speak my mind, I took a question as a question and would answer as well as I could, when I could. Sometimes, a curious thing would happen: people would listen. They'd hear about the Indian subcontinent, the openness of Hindu scripture, and the amazing ancient Asian societies that existed before the Bible was even written.

Those of us who are of a relatively unknown race or religion in our environment know that we are sometimes given a chance to explain who and what we are, even if it’s to say our race or religion are not a fundamental part of what makes us who we are. It’s not an omnipresent phenomenon and it’s not an easy battle to fight, but at least it’s an opportunity that does, on occasion, show its face. And with those small occasions, we are able to safely speak our mind and break down barriers of ignorance that come from a place in which people truly have not heard anything either way. Over time, these occasions can add up to create a completely different, more open context and understanding to the immediate environment.

In this way, the discrimination I faced being Indian and Hindu in the time and place in which I grew up fundamentally differed from what the black community, and increasingly the Muslim and Hispanic communities, face in America. The discrimination I dealt with was borne out of the unknown; as I described above, this unknown sometimes sparked questions, which sometimes warranted answers, which sometimes made things more open and better. But what happens when discrimination comes not from the unknown, but the assumed? What happens when a person or society does not ask an even rhetorical question, but assumes they know who you are based on your race or religion or style?

As Obama said in his public statement on the acquittal of George Zimmerman, the problems of the black community stem from a long, complicated, painful and fairly recent history of enslavement and institutionalized discrimination. However, Obama’s family was largely not of this history. His father was from Kenya and came to the States for a brief period in the 1960s; his mother was a white American woman from the Midwest. Yet Obama, a half-black man who mostly grew up in the States, has written in detail about his experience of automatically being relegated to a specific community, a specific set of behaviors, problems, and ideas based on his race. If the current President of the United States has faced these problems, imagine what this means for the diverse communities that compose the millions of black people across the nation.

Knowing that the atmosphere of a room has negatively shifted because you, a minority, entered; that look in the eye showing someone genuinely feels scared of or angry at you because they think you are of an inferior race; that sigh of sadness because someone thinks you are going to hell for believing in a different manifestation of God. For those of you who have never felt unsafe because of your race, religion, sexual orientation, or any physical, mental or emotional attribute you carry, these ideas are probably very hard to understand, because these are examples of discrimination that can only be felt. Feelings are hard to capture in laws, policies or constitutions.

Hate Crime laws are one attempt to codify protections against discrimination. Though I personally think Hate Crime legislation is a tool that sends a powerful message, Hate Crime laws are predicated on the idea of someone targeting another because they are part of a particular social group, and are therefore best at capturing criminals who want to use, or are open about using, their discriminatory views as justification for their violence. Most people, minority or otherwise, would write these criminals off as extremists, vigilantes, racists, homophobes, or the insane.

Those of us who have faced discrimination borne out of the unknown and/or the assumed understand that racism can manifest itself in many ways, and the hardest battles are often fought against people or societies that refuse to acknowledge there is a problem or are able to deny a problem exists. Likewise, proving that someone purposely targeted another person in the strict legal sense is one of the hardest things to do when trying to demonstrate discriminatory behavior in the real world.

What then happens when these discriminatory sentiments translate into violence? What happens when the George Zimmermans of the world are allowed to carry a loaded weapon on the street, chase down an unarmed teenager who has every right to be where he is, and then shoot this teenager at point blank range after starting the confrontation himself? What happens when the all-white (save one Puerto Rican) jury agrees with the white defense lawyers that the George Zimmermans of the world were justified in thinking this teenager – a black male in a hoodie – looked suspicious, but at the same time, we are not allowed to make the case the George Zimmermans of the world based their judgment call on race because the targeting was not explicit?

Any minority in America will tell you that one of the best defense mechanisms we have in these uncomfortable and unsafe situations is to simply walk away. We live in a country that, up until this ruling, in practice allowed us to waive our chance to explain who we are in favor of simply removing ourselves from the situation and from any ensuing danger. The ruling acquitting George Zimmerman of shooting and killing Trayvon Martin has, in part, caused so much emotional turmoil because a person wishing to avoid confrontation that is based on bias or ignorance is now not even allowed to simply walk away. If someone does not like something about who or what we are, if they are able to convince a jury that this who or what we are makes us "suspicious" by default, we cannot walk away from the ignorance or discrimination. In effect, we have lost the right to be who we are and move ourselves to a place where we feel safe as who we are. Those of us who continue to fall into the unknown may have a fighting chance to defend ourselves under this system. But those who fall into the assumed never had much of a chance to begin with.




[1] I have to give credit where credit is due. Though elementary and middle school were less than desirable experiences, I went to an internationally geared high school that was way ahead of the social curve, probably for the country and certainly for the area. It had a well-supported LGBT alliance, offered classes in 13 languages, and (at least for me), created an environment that was open to all religions, races, and sexual orientations and genders.