Tagged: ethics

  • Ian O'Byrne 6:25 pm on March 7, 2020 Permalink | Reply
    Tags: entrepreneurship, ethics, objectivity

    The Prodigal Techbro 

    https://conversationalist.org/2020/03/05/the-prodigal-techbro/

    The tech executive turned data justice warrior is celebrated as a truth-telling hero, but there’s something a bit too smooth about this narrative arc.

     
  • Ian O'Byrne 3:25 pm on March 5, 2020 Permalink | Reply
    Tags: ethics, humanrights

    Recommendations On Content Governance in Digital Spaces 

    https://www.accessnow.org/cms/assets/uploads/2020/03/Recommendations-On-Content-Governance-digital.pdf

    Therefore, this paper does not seek to establish a universal set of specific solutions for the complex and thorny issues that content governance raises. Instead, building on our experience in policy development across the globe, we offer basic human rights-centered guidelines that can serve as the minimum basis for governance policies that are fit-for-purpose, given that stakeholders must consider the specific actors and technologies in play in their region. We plan to elaborate further on regio…

     
  • Ian O'Byrne 2:47 pm on February 13, 2020 Permalink | Reply
    Tags: ethics

    Democratizing AI 

    A three-part series on ethics and artificial intelligence from Richard Whitt.

    Democratize AI (Part I)

    How to ensure human autonomy over our computational “screens, scenes, and unseens.”

    Democratize AI (Part 2): The Personal AI 

    a potentially effective way to challenge the one-sided proliferation of Institutional AIs is the introduction of human-agential artificial intelligence — let’s just call them Personal AIs. These virtual avatars would directly serve each of us as human beings, and our chosen communities of interest — including family, friends, and other social ties. Part III in this series (coming soon) will lay out a proposed action plan — the “how” — to help make these aspirations a reality.

    Democratizing AI (Part 3)

    The thesis is that not just billionaire industrialists deserve to have personalized virtual assistants. Ordinary people should have the ability to own a Personal AI, acting as a fully accountable computational agent to represent their self-sovereign interests. Without our concerted push-back against current trendlines, however, Institutional AIs instead will become the de facto norm of our time.

     
  • Ian O'Byrne 12:33 am on February 6, 2020 Permalink | Reply
    Tags: dataethicscourse, ethics, human_rights   

    Can Silicon Valley be more ethical? Salesforce, Google hired ethicists to rethink processes 

    https://www.protocol.com/ethics-silicon-valley

    In the wake of the Cambridge Analytica scandal, employee walkouts, and other political and privacy incidents, tech companies faced a wave of calls to hire what researchers at the Data & Society Research Institute call “ethics owners,” people responsible for operationalizing “the ancient, domain-jumping, and irresolvable debates about human values that underlie ethical inquiry” in practical and demonstrable ways.

     
  • Ian O'Byrne 3:31 pm on January 23, 2020 Permalink | Reply
    Tags: ethics   

    ITHAKA Next Wave Part 3: Truth, Lies, and Digital Fluency on Vimeo 

     
  • Ian O'Byrne 4:49 pm on September 20, 2019 Permalink | Reply
    Tags: ethics   

    Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics 

    Ethics is arguably the hottest product in Silicon Valley’s hype cycle today, even as headlines decrying a lack of ethics in technology companies accumulate. After years of largely fruitless outside pressure to consider the consequences of digital technology products, the very recent past has seen a spike in the assignment of corporate resources in Silicon Valley to ethics, including hiring staff for roles we identify here as “ethics owners.” In corporate parlance, “owning” a portfolio or project means holding responsibility for it, often across multiple divisions or hierarchies within the organization. Typically, the “owner” of a project does not bear sole responsibility for it, but rather oversees integration of that project across the organization.

    A remarkable range of internal and external challenges and responses tends to fall under a single analytic framework called “ethics.” This strains an already broad term that in some contexts means an open-ended philosophical investigation into moral conditions of human experience and, in other contexts, means the bureaucratized expectations of professional behavior. Likewise, it places strain on corporate structures because it is bureaucratically challenging to disambiguate whether these problems belong in the domain of legal review, human resources, engineering practices, and/or business models and strategy.

    SOURCE

     
  • Ian O'Byrne 4:48 pm on September 20, 2019 Permalink | Reply
    Tags: ethics

    Facing the Great Reckoning Head-On 

    danah boyd:

    “Move fast and break things” is an abomination if your goal is to create a healthy society. Taking shortcuts may be financially profitable in the short-term, but the cost to society is too great to be justified. In a healthy society, we accommodate differently abled people through accessibility standards, not because it’s financially prudent but because it’s the right thing to do. In a healthy society, we make certain that the vulnerable amongst us are not harassed into silence because that is not the value behind free speech. In a healthy society, we strategically design to increase social cohesion because binaries are machine logic not human logic.

    The Great Reckoning is in front of us. How we respond to the calls for justice will shape the future of technology and society. We must hold accountable all who perpetuate, amplify, and enable hate, harm, and cruelty. But accountability without transformation is simply spectacle. We owe it to ourselves and to all of those who have been hurt to focus on the root of the problem. We also owe it to them to actively seek to not build certain technologies because the human cost is too great.

    SOURCE

     
  • Ian O'Byrne 1:20 pm on September 10, 2019 Permalink | Reply
    Tags: ethics, MIT   

    He Who Must Not Be Tolerated 

    nytimes.com

    Joi Ito’s fall from grace for his relationship with Jeffrey Epstein was much deserved. But his style of corner-cutting ethics is all too common in tech.

    Nothing has changed. I get that not every fortune is clean and that it is impossible for every donor or investor or adviser or leader in tech to be perfectly pure. But if you can’t manage to say a hard no to those responsible for the dismemberment of a journalist or to a predator of young girls, I am not sure what to say.

     
  • Ian O'Byrne 2:13 pm on August 28, 2018 Permalink | Reply
    Tags: ethics

    Kant and information ethics 

    A piece by Charles Ess & Mary Thorseth in the journal Ethics and Information Technology (2008). All annotations in context.

    Kant’s basic thoughts on autonomy and the public domain are highly relevant to challenges concerning modern society, particularly to communication in the public sphere. Trust is but one important topic being discussed here; openness another. Thus, our aim has not only been to demonstrate how Kant can be productively applied to new technology; in addition, it has been to show how the basic philosophical queries raised within this context can be fruitfully illuminated within Kant’s conceptual frames.

    In particular, Myskja points out that the largely disembodied character of most online communication thereby cuts us off from important, perhaps crucial channels of non-verbal communication that may be essential to trust-building.

    At the same time, however, especially as the Internet increasingly becomes a primary venue for participating in “…the political, social and commercial activities necessary for full participation in a liberal democracy,” establishing trust in online worlds becomes a correlatively more pressing matter.

    …phenomenologically-based approach to trust, one that stresses precisely that “…the bodily presence in the encounter appears to be essential for understanding the relation of trust.” He makes this point in part by way of reference to the work of K. E. Løgstrup, E. Levinas, and others – and thereby takes up trust as an “irreducible human phenomenon.”

    Contrary to what many have criticised as an excessively idealistic Kant in the (in)famous example of the Categorical Imperative requiring us to tell the truth even to those obviously bent on harm, Myskja points out that in Kant’s later work a more realistic understanding of human nature, and thereby a more nuanced understanding of the role of deception, emerges. Briefly, deception may take place for less than ideal reasons – but as deception allows us to hide our more negative characteristics while nonetheless developing more virtuous character, it can help us become better persons. This role of deception fits wonderfully well with what is otherwise often regarded as a highly morally problematic dimension of online communication – precisely that we can there hide our real selves.

    Finally, Thorseth points out that Kant’s notion of reflective judgment is of possible judgments, in contrast with actual judgments – where the former refer to something virtual in the sense of what is possible for human beings to imagine. For Thorseth, the well-known virtual world of Second Life stands as an example of a virtual reality in which a key condition of reflective/possible judgment is met – namely, that we are able to avoid the illusion that our purely private and personal conditions somehow constitute an objective context or reality.
    “The liberation of our judgments from subjective private conditions is a necessary condition for weighing our judgments with the possible judgments of others, by putting ourselves in the position of everyone else.”

    The basic dilemma is simple. If the algorithms are open – then webmasters (and anyone else) interested in having their websites appear at the top of a search result will be able to manipulate their sites so as to achieve that result: but such results would then be misleading in terms of genuine popularity, potential relevance to a searcher’s interests, etc., thereby reducing users’ trust in the search engine results and hence reducing the usability and accessibility of important information. On the other hand, if the algorithms are secret, then the legitimate public interest in understanding how web pages are ranked is foiled: in particular, users cannot know whether or not a high ranking is the result of payment – and again, such secrecy reduces trust and thereby the usability and accessibility of important information.

    “The dilemma, then, is that a right to information could make people worse off in terms of information.” Elgesem then provides a contextual analysis of the role search engines play in the broader “information ecology” constituted by contemporary ICTs. Elgesem is able to connect the search engine dilemma with Kant’s second formulation of the Categorical Imperative: “Act in such a way that you treat humanity, whether in your own person or in the person of another, always at the same time as an end and never simply as a means.” Here, Elgesem interprets Kant to mean that by “humanity,” Kant refers to our ability to reason as the central property that makes us human. The simple point, as emphasized in Kant’s famous example regarding lying, is that failure to provide truthful information is a prime example of violating the CI, because false information makes it impossible for the recipient to exercise her rationality. By the same token, Elgesem argues that a biased search engine likewise makes it impossible for users to exercise their rationality, and thus likewise represents a violation of the CI.

    “In a complex information society, with a highly developed division of intellectual labor, we have no option but to rely on information from sources that are usually trustworthy.”

    …visualizations are more than just “pretty pictures”: rather, precisely in virtue of their bringing into play our shared cognitive and aesthetic frameworks as human beings, they thereby catalyze the epistemological – but also aesthetic and thereby social, if not also political – processes that create a shared intersubjective framework in the first place, one that then makes possible trust-building and a shared sensus communis within which the enterprise of collaborative science may take place.
     
  • Ian O'Byrne 3:05 pm on June 13, 2018 Permalink | Reply
    Tags: ethics, lies

    The Lifespan of a Lie 

    It was late in the evening of August 16th, 1971, and twenty-two-year-old Douglas Korpi, a slim, short-statured Berkeley graduate with a mop of pale, shaggy hair, was locked in a dark closet in the…

    Ben Blum on the Stanford Prison Experiment, in Medium. The famous psychology experiment was apparently a sham, and yet it continues to inform criminal justice policy, education, and more.

    It was a defining moment in what has become perhaps the best-known psychology study of all time. Whether you learned about Philip Zimbardo’s famous “Stanford Prison Experiment” in an introductory psych class or just absorbed it from the cultural ether, you’ve probably heard the basic story.

     

    The SPE is often used to teach the lesson that our behavior is profoundly affected by the social roles and situations in which we find ourselves. But its deeper, more disturbing implication is that we all have a wellspring of potential sadism lurking within us, waiting to be tapped by circumstance. It has been invoked to explain the massacre at My Lai during the Vietnam War, the Armenian genocide, and the horrors of the Holocaust. And the ultimate symbol of the agony that man helplessly inflicts on his brother is Korpi’s famous breakdown, set off after only 36 hours by the cruelty of his peers.

    There’s just one problem: Korpi’s breakdown was a sham.

    Part of the takeaway:

    The appeal of the Stanford prison experiment seems to go deeper than its scientific validity, perhaps because it tells us a story about ourselves that we desperately want to believe: that we, as individuals, cannot really be held accountable for the sometimes reprehensible things we do. As troubling as it might seem to accept Zimbardo’s fallen vision of human nature, it is also profoundly liberating. It means we’re off the hook. Our actions are determined by circumstance. Our fallibility is situational. Just as the Gospel promised to absolve us of our sins if we would only believe, the SPE offered a form of redemption tailor-made for a scientific era, and we embraced it.

     