The impact of social media on academia: an interesting exchange of viewpoints
July 17, 2022

 

Is Twitter Making Academe Stupid and Mean?

Does social media destroy thinking or nurture community? Three professors weigh in.

JOAN WONG FOR THE CHRONICLE, JULY 11, 2022

 

“Did you see what happened on Twitter today?” For Twitter’s users, posters and lurkers alike, that breathless question might define the feel of the platform. “Users” is apt, because addiction is built into the website, or “the hellsite,” as the damned citizens of the Republic of Twitter say. To those not on Twitter, accounts of the latest blowup are opaque, inscrutable, and disturbing — byzantine descriptions of the court politics of some psychotic remote civilization in the midst of civil war. To the academics and journalists whose professional lives are most closely involved with the happenings in Birdland, the strife is of professional interest — if only to learn how to stay out of the way.

But it’s not all compulsive self-torture! What peruser of academic Twitter has not stumbled across some article or book at just the right time? And the platform’s promotional usefulness is undeniable — a Twitter account can be the difference between an article’s sinking into obscurity or its elevation into the higher atmosphere of fame. At least, it sometimes feels like it.

We asked three scholars to take the measure of Twitter. For the prosecution, Katherine C. Epstein discusses what she sees as the platform’s formal incompatibility with the vocation of the scholar, and Irina Dumitrescu warns about its degrading effects on the way we think. For the defense, Rafael Walker speaks to the platform’s real possibilities for establishing community — and even lasting friendships.

Katherine C. Epstein | Irina Dumitrescu | Rafael Walker

Academic Twitter Puts the ‘Moron’ in ‘Oxymoron’

BY KATHERINE C. EPSTEIN

To peruse academic Twitter is to watch the crumbling of the Enlightenment in real time.

By “academic Twitter,” I mean primarily the use of Twitter by academics to comment publicly on colleagues and their work. Occasionally, the commentary is hyperbolically favorable: “See X’s characteristically brilliant latest piece.” More typically, the commentary is unfavorable. At its most extreme, it is entirely ad hominem: That is, it attacks other scholars without even the pretense of engaging with their work. In less extreme cases, tweets may offer a fig leaf of evidence — perhaps a sentence-length quotation, or a screenshot of a single paragraph. Within a day, the original tweet can generate dozens of comments and hundreds of likes, while spawning new threads. Picture a group of middle-schoolers pointing and laughing at a classmate, and you understand the basic dynamic.

The widening embrace of Twitter by academe has proceeded without any leadership by professional bodies, or at least without leadership as it is usually defined — sober, mature, responsible. Professional bodies have “led” in the sense that lemmings lead, which is to say unreflectively and without regard for potential consequences. For instance, the leading (or “leading”) professional organization in my discipline, the American Historical Association, declares in its “Guide for Dealing With Online Harassment” that historians “have the right to expect that discussion of their work as historians will be conducted in a civil manner, without the harassment and intimidation that mars much of public life in a digital age.” But in the event of harassment, it recommends turning for support to “your community of friends and colleagues” — as if only the benighted lay public, and not one’s ostensible colleagues, do the harassing. Alas, one cannot turn to the AHA itself for support: It disbanded its division for investigating complaints about violations of its “Statement on Standards of Professional Conduct” years ago. So much for professional self-governance.

Thus led by professional bodies, academic Twitter has largely evaded the sort of critical scrutiny that might be expected from scholars. Indeed, although exceptions can be found, the dominant genre of commentary by academics about academic Twitter is a techno-utopian bromide. It acclaims academic Twitter for enabling genuine intellectual exchanges with colleagues whom one would otherwise never know, greater public engagement, and so on. To the extent that there is any critique of this medium, it tends to focus narrowly on specific cases, rather than offering any analysis of the whole. The net result is to treat the majority of academic Twitter as essentially unproblematic. In fact, even relatively benign aspects of academic Twitter are highly problematic.

This is because Twitter represents the denial of the values that academe is supposed to represent.

To begin with, Twitter is designed to shortcut the critical thinking that we in academe claim to be teaching. Its very grammar — a sequence of 280 characters, or about 60 words — precludes the development of sustained, complex argumentation, the minimum unit for which is a paragraph. Tweetstorms don’t count; they don’t demand the structural rigor that a paragraph, essay, or book does. I tell my students, perhaps naïvely, that one reason to study history is that it builds the capacity for sustained, complex argumentation, which is to say, citizenship; and when I assign papers, I say that they are opportunities to strengthen this muscle. Meanwhile, Twittering academics are participating in a medium that causes this muscle to atrophy and even tear. Yay?

Next, the intermixing of professional credentials and personal identities in Twitter bios assaults the concept of expertise. The bios typically run something like this (I’m making this up, but I think I have the style right): “History prof @[insert handle here]. Lover of cats. Watcher of football. Likes strong bourbon, good beer, and long walks on the beach.” How can readers tell which part of my identity is responsible for which tweets, and how can they be blamed for inferring that inexpert aspects of my humanity influence the expressions of the expert one? If I tweet something critical of, say, Republicans, is it the history prof or the lover of cats speaking (or the bourbon)? Professional credentials are a shared resource. When individuals devalue the credential by abusing it, they devalue it for everyone.

Last but certainly not least, Twitter is a fundamentally unscholarly space, with no responsible editors and not even the pretense of peer review. It’s where one goes to self-publish, or, less generously, to mouth off about scholarly matters without any of those irritating checks and balances that scholarship mandates. Academe consists of scholarly disciplines, not scholarly do-whatever-you-wants. Is Donald Trump’s once and perhaps future favorite medium of communication a wise choice for academics, especially those on the left? In other contexts, that choice might be regarded as #accommodationism, not #resistance.

Twitter brings out the worst in some academics, as it does in others, because academics are people too (see: any faculty meeting, ever). Academics do not elevate the level of discourse on Twitter; Twitter debases the level of academic discourse. There’s a hubris in thinking that academics can somehow transcend Twitter’s essentially anti-academic, anti-intellectual biases, not unlike the hubris in thinking that using nuclear weapons “tactically” somehow makes them nonnuclear.

In theory, to be sure, Twitter no more determines the nature of its use than does any other technology. In practice, it would be difficult to imagine a technology less amenable to academic discourse than Twitter. Its algorithms discourage nuance, qualification, and reflection, the lifeblood of serious scholarship. They reward snark, willful misrepresentation in order to score points, and shooting from the hip — the enemies of serious scholarship. The ways in which Twitter channels behavior may not determine how people behave on it, but the channeling goes far toward explaining why some academics behave so disgracefully on it.

The nadir of academic misbehavior on Twitter is the academic-Twitter mob. For those who have never observed Twitter mobs form and go into action, consider yourselves lucky. The energy, like all mob energy, is crackling and dangerous. The victims of the mob don’t stand a chance, because the attack isn’t really about them: it’s about gratifying the psychological needs of the attackers. Twitter mobs are frightening enough. But they’re even more disturbing when formed by academics, who should be the last people to participate in mob behavior.

How do academics who join Twitter mobs not have a little voice inside of them that goes, hey, maybe I shouldn’t be doing this? Maybe the anger that’s coiling around me and the dopamine hit I’m getting is a bad sign, like the opposite of the reasoned analysis that I’m supposed to be engaging in? Maybe I should bother, at a bare minimum, to examine the alleged evidence before piling on? It should be impossible for academic Twitter mobs to form; it is a complete and utter indictment of academe that they do, and so frequently. To compound matters, there’s zero accountability for the perpetrators or justice for the victims, whose reputations are assailed and whose careers are sometimes damaged.

In short, academic misbehavior on Twitter isn’t a bug of the system: it’s a feature. Academics who behave in ways contrary to academic values on Twitter are no more “misusing” Twitter than someone who shoots people “misuses” a gun. A gun can be used as a paperweight, but that’s not its intended purpose. Twitter is meant to be used in ways that trample on academic values.

I am aware that some academics behave much better than others on Twitter; perhaps some can even be said to use it responsibly. I appreciate that valuable intellectual exchanges might occur on Twitter. I know that some behavior on Twitter I object to is driven by economic pressures, not character flaws. I also understand all the arguments in favor of scholars engaging with a broader public. But my question is, at what cost — to the individual academics who get attacked on Twitter and have no effective remedy, to the collective community of scholars upon whom the online behavior of its members reflects, and to the broad public which we cannot serve if we destroy our credibility and the value of our professional credentials?

Next up: fake Amazon reviews!

Katherine C. Epstein is an associate professor of history at Rutgers University at Camden.

Time for a Long Pause

BY IRINA DUMITRESCU

Social media can have positive uses — this is how so many of us became hooked. If your time on social media is a net good in your intellectual, professional, and personal life; if it has given you a supportive community, irreplaceable intellectual resources, and a chance to find your voice; if you have done the reckoning and its benefits continue to outweigh its costs for you, then please stop reading this piece now. It is not written for you.

But perhaps you have growing doubts. You may have started to notice that Twitter is having negative effects on your work or your relationships. You may be counting the hours you spend scrolling, commenting, and sharing and wondering what else you could do with that time. If so, think about deleting your social-media accounts — or at least taking a substantial break from them.

There is a good chance that social media is making you less capable of sustained, serious, and independent thought. It has become a cliché at this point to say that the problem lies not in the networks themselves but in the algorithms they deploy to hold users’ attention for as long as possible. But some clichés are true. The type of content that is pushed by social-media algorithms in order to keep you addicted may be damaging the quality of your thinking. Here is what you stand to lose:

Attention. Unless you are extremely disciplined about how and when you use it, social media is very likely affecting your ability to concentrate for substantial periods of time on endeavors that are challenging and meaningful. These platforms are engineered to keep you checking your accounts so often that it becomes a habit. If you have ever absent-mindedly picked up your phone to check your notifications, or started typing the address of Facebook or Twitter into your browser without intending to do so, then you have become their ideal product — your attention already sold to advertisers. It is common these days to complain that students are unable to maintain their focus long enough to read a book or follow a lecture. But consider: Is your concentration what it was 10 or 20 years ago? Are you able to lose yourself in a book or a movie as often as you would like? Can you sit down and work on an essay or a research problem for an hour without interrupting yourself?

To be fair, Facebook, Twitter, and their ilk are not solely responsible for the difficulty many of us feel concentrating these days. Other websites and applications share their addictive qualities. YouTube (which some consider a social-media platform) and Netflix keep users hooked through streams of content adapted to their interests. News sites deliver craveable hits of novelty and excitement. Even checking email can become a habit that interferes with deep work. But with targeted content, tagging, direct messaging, constant notifications, and the dopamine boosts provided by likes and reshares, social media is particularly good at encouraging us to leave whatever we are doing and log on, again and again and again.

Depth. Once back on the platform, we are in a system that accustoms us to react and behave in a way that is antithetical to the habits of thought cultivated by higher education. Academic training aims, at least, to teach people to think slowly, deeply, and carefully. We teach our own students to pay attention to context, evaluate sources critically, and consider counterarguments and opposing views, even if it is to argue against them more effectively. In the best-case scenario, this kind of thinking produces ideas that move beyond what is already obvious to most people.

The trend on social media is in the opposing direction. Twitter, TikTok, and Instagram demand content that is short, fast, and shareable without context. It is true that some platforms, such as Facebook and LinkedIn, allow for longer posts and extended discussions. (Perhaps not coincidentally, these are generally seen as being for an older demographic.) But even platforms that allow for longer material are now heavily promoting short-form videos to serve decreasing attention spans. We may be able to see memes, sound bites, and hot takes for what they are: entertaining, provocative, disposable. But the more time we spend each day on social media, the more we are habituated to think in the forms it has given us. We might share a nascent idea before it is ripe, or offer a quick reaction to an issue before thinking it through. Brevity can be a virtue, but only when it is the result of discernment. What social media offers is the fast-food version of thinking.

Truth. Social media can be deceptively useful. We may feel that staying online keeps us informed. The problem is that the quality of information and discussion varies dramatically, even on a single platform. On Twitter, for example, you may be able to read commentary on current events from someone with hard-won expertise: a journalist reporting on a crisis on location, say, or a scholar who can put a new discovery in context. This is part of what makes that platform so appealing.

At the same time, those insights will be mixed in with unreliable or deliberately deceptive posts. Profiles you follow might share dubious claims out of habit or a desire to seem supportive. Many people retweet links to articles they have not read. Online, people regularly claim expertise they do not have, assume fake identities, or impersonate prominent figures. Malicious bots spread misleading claims and engage human users in conversation. Consider the efforts to which you go to teach your students how to base their research on reliable data and sources. Now ask yourself how much of your day you spend taking in questionable data from questionable sources, and what that is doing to your judgment. It is possible to curate a social-media feed that mainly delivers reliable, sound information, but that takes energy, attention, and time. Do you have so much of those resources that it is worth spending them in this manner?

Independence. Perhaps worst of all, social media encourages its users to embrace opinions popular in their circles without thinking them through. Sometimes this happens organically. People instinctively want to fit in with their peers, to be seen to be supporting the right causes, adopting the correct interpretation, supporting the right people. Academics like to think that they are above groupthink, but they are just as liable to it as anyone else. Sometimes people fall in line because of social pressure. In some circles it is common to demand immediate public statements from people on hot-button issues, and to assume that silence is tacit support for the opposing viewpoint. Online, the costs of holding a different opinion from your peers are more immediate.

In this context, it is difficult for people to consider evidence and come to their own conclusions. There may be a diversity of ideas on social media, but because algorithms amplify posts that elicit strong emotions, it is the most extreme versions of those ideas that are boosted — and fast. It’s not just that nuance, ambiguity, and complexity are sidelined on these forums, though they are. It’s still possible to resist the urge to take a position on an issue thoughtlessly. But it is harder to come to an independent opinion when many in your circle have already lined up on one side, when you have read their answers before you even knew what the question was.

Community. One of the great boons of social media has been connection. Scholars who might not have anyone at their own university with whom they can discuss their research can do so easily online. It is hard to forgo this benefit, especially if you have few colleagues in your geographic area. There are subtle costs, however, to moving scholarly communities online, rather than maintaining them through conferences, direct communication, or the now-ancient-seeming listserv. Put simply: the negative emotions which social-media algorithms foster and reward undermine the very bonds these platforms once helped create.

Envy has been an inevitable part of scholarly life since the beginnings of the university. There are angels among us who feel no twinge when they see a colleague thrive, who never feel sad because of someone else’s accomplishments. But most of us are not angels. These feelings are more manageable when we remember each other’s humanity. In a conversation, we get to know a real person. On social media, we see people become brands, advertising their new jobs, fellowships, and prizes.

What does it mean to build our communities on platforms we do not control? Social-media algorithms promote the incendiary, the sensational, and the outrageous. Imagine organizing a conference where the microphones are turned up every time someone says something critical, angry, or accusatory. Social media is also notorious for facilitating harassment, which disproportionately affects marginalized people. Traditional academic events are not free of harassment either, a problem scholarly associations have often been slow to deal with. But scholars still have much more influence over their own organizations than they do over the policies of Twitter or Facebook. As Jeffrey Lawrence has pointed out in these pages, “digital platforms not only reproduce the racial and gender biases of society at large; they often refuse to modify the algorithms that promote such biases if they believe that doing so will substantially detract from their bottom line.”

It has never been so difficult to pull back from social media. The Covid-19 pandemic has made it harder than ever to keep in touch with scholarly communities. Even before the pandemic there were reasons online networks seemed like an indispensable way to connect with colleagues: lack of research budgets, physical or practical inability to travel, or sheer distance. Add to this the growing pressure (real or imagined) on academics to become managers of their own public brands, constantly updating the world on their work.

But the use of social media remains a choice. The era of endless Zoom meetings is precisely the right time to try to save what is left of our ability to think with clarity, independence, and depth. If you find that being online is ruining your attention span, your relationships, or your ability to think and speak for yourself, consider taking a break. A long one.

Irina Dumitrescu is a professor of English medieval literature at the University of Bonn.

 

A Vindication of Academic Twitter

BY RAFAEL WALKER

A late adopter of Twitter, I joined three years ago, at the urging of one of my senior colleagues. It wasn’t an easy sell. I was initiated into the profession years before tweeting was in vogue for academics. While I had other forms of social media, Twitter — with its open-air format enabling anyone from anywhere to say anything to you — seemed forbidding. Besides, I protested, what on Earth could I say in 280 characters?

Won over by my colleague’s cogent argument about the importance of visibility — and his gentle reminder that I had a book coming out in a few years — I caved. And I’m glad I did. I’ll admit outright that I adore the site and couldn’t imagine leaving.

It was gratifying to learn, as I was preparing to write this, that I am hardly alone. I asked academics on Twitter why they liked it, and the response was overwhelming — a token both of the remarkable generosity of academic Twitter and of that community’s eagerness to opine. In less than 24 hours, I already had nearly 300 written responses to my post, many of them strikingly fervent. Most responses confirmed what I already believed, but a few opened my eyes to crucial affordances of the platform that I hadn’t fully grasped.

This year, though, has not done much to endear Twitter to academics, who have witnessed unprecedented deterrents to using the site from both outside and within the academy. On the one hand, Elon Musk’s rapidly unfolding plan to acquire Twitter has many concerned about privacy, and about whether the site will remain safe from political demagoguery and disinformation campaigns. On the other, prominent figures within the academy have voiced acerbic disapproval of tweeting academics. Joyce Carol Oates, for example — in a characteristically colorful tweet — described the site in this way: “Twitter is a haven for people who’d studied too hard while in school & are compensating by deteriorating in semipublic in adulthood.” (Oates is herself, it’s worth noting, a frequent tweeter.)

Less flippantly, David Bromwich, in a recent interview with The Review, asseverated that tweeting “goes against the vocation of being a scholar” and predicted that “it’s going to reduce the prestige of professors” by making us seem “more like everyone else.” Assessing the platform with a more even hand, Irina Dumitrescu — in an interview stemming from her earlier essay in these pages condemning “professorial groupthink” — worries about the ease with which such avenues as Twitter enable toxic academics to disseminate their toxicity, their defamations and other kinds of abuse.

For my part, I find the risks exercising Twitter’s critics much less concerning. For one, companies have been collecting and profiting from our personal information for a very long time, and Twitter, whether under Musk’s leadership or someone else’s, is anything but unique in this regard. We all voluntarily offer up what amounts to reams of information about ourselves with virtually every keystroke and click. Moreover, the demagogues have shown that they will find venues for their rabble-rousing, no matter what (hence the proliferation of social-media platforms after Trump’s ejection from the site).

As to the toxicity question — in my view, the most legitimate criticism of Twitter — Dumitrescu provides the counterargument for us: “narcissists and sociopaths,” as she points out, will exist no matter what, and they will find some way or another to inflict the harm that their egos crave. I would round out her observation by noting that Twitter has both a “mute” and a “block” option (both of which I use liberally), tools helping users to avoid injurious or annoying people more efficiently than is possible in face-to-face interactions.

Finally, Oates’s and Bromwich’s fear that tweeting academics risk embarrassing themselves or — Heaven forfend! — seeming too much like the common folk demonstrates how easily the adjective elite can slip into its ugly cousin, elitist. To me, the courage to descend from the ivory tower and expose the full range of your humanity is admirable, a welcome corrective to the snobbish aloofness that so many of us have learned to cultivate. Besides, only small, ungenerous minds would impugn someone’s scholarship solely on the basis of a dog photo or a lighthearted Twitter rant against mayonnaise. Such people ought not be pandered to.

Maybe academic Twitter isn’t as bad as many have suggested, but is it any good? I’m convinced that it is, and here’s why.

The professional networking benefits are unparalleled, and this advantage alone makes Twitter worth it. Precisely because of its open-air format, Twitter brings people to your attention, and you to theirs, with whom you would be very unlikely to have sustained contact otherwise. Sheerly through tweeting, for example, I’ve accrued important professional allies, received invitations to coveted panels and other speaking engagements, and been offered excellent publication opportunities (including a prospective publisher for my first book).

Those are some of the reasons I encourage friends and mentees not only to join the site but also to tweet regularly. It’s not enough to be a “lurker,” the craven voyeur who logs on to take but never share. This habit, while safer, forfeits one of the greatest benefits of the site — its capacity to put you on the radars of potential collaborators or editors. You can’t appear on anyone’s radar if you are invisible. To be seen, you must exhibit.

I understand that many academics, especially more established ones, may find the platform’s rampant incivility and half-baked diatribes enough to tilt the scales in favor of abstention. This is no trivial concern. The internet is, generally, an uncivil place. Add to that the psychological finding that it requires three good cognitions to counteract just one bad one — that pleasant inputs to the brain have only a third of the staying power of bad ones — and it’s easy to grasp why some struggle to shake the Twitter jitters.

But the site can also yield fulfilling, long-term connections, and it is worth asking whether the transient discord that one encounters on Twitter — discord scarcely less prevalent elsewhere in academic life — is worth depriving oneself of the possibility of lifelong friendships. After all, we do not apply this logic in other areas of our lives. We have one-night stands, go on dates, and get married fully aware that so much could go wrong but optimistic that so much could go right. Sociability is risky, wherever it happens.

But we have a choice in how we approach those risks. We can take Sartre’s cynical view that “hell is other people” (indeed, “hellsite” is a favorite epithet for Twitter’s detractors). Or we can leap headlong into social life, social media included, with Tennyson’s sanguineness, accepting that “’Tis better to have loved and lost than never to have loved at all.”

I have chosen the Tennyson way, and I don’t regret it one bit. Despite Twitter’s wet blankets (most of whom I have blocked or muted), I have forged relationships that I couldn’t imagine having forged otherwise. When I accepted my job in New York, for example, I had plenty of friends in the city but knew virtually no gay academics in the area, and, much as I loved my existing friends here, I yearned for more simpatico company — friends who would understand what I meant when I referred to the overserved 20-something at the bar as Lydia Bennet.

Then 2020 threw an enormous wrench into the prospect of hanging out with strangers. But Twitter helped to pry out that wrench. I connected with two brilliant gay English professors around my age, we met for drinks and French fries in real life, and they are now so close to me that I can hardly remember not knowing them. Since then, I have continued to meet people from all over the country — for meals, dancing, Zoom-based happy hours, rooftop hangs — and even started a remote reading group focusing on Moby-Dick and Middlemarch with two scholars (now cherished friends) whom I had never met outside Twitter but now see almost weekly.

My story isn’t unique. I know many other academics who have developed friendships on Twitter that might not have stood much chance without the site. I have also come to understand that Twitter has been a lifeline not only for scholars socially starved by pandemic-related restrictions but also for those belonging to groups historically shut out for all kinds of structural reasons. For disabled scholars, the accessibility benefits afforded by Twitter defy enumeration. In discussing these benefits with two scholars of disability studies, Jason S. Farr and J. Logan Smilges (both initially Twitter friends, incidentally), I learned that Twitter enhances access significantly for hearing-impaired, mobility-impaired, and neurodivergent people. And it provides a forum for scholars with disabilities to build communities — in which they can, among other things, figure out ways of navigating our deeply ableist profession.

Scholars from other underrepresented backgrounds reap similar benefits from the platform. Often demographic loners in our departments (if not our entire institutions), many of us in the minority find on Twitter a source of reassurance that we are not alone in the profession and that we belong here.

Graduate students often feel less daunted corresponding with more-senior scholars on Twitter than they might by email, and, when they tweet civilly, they have the potential to form career-launching connections. Scholars at small, far-flung colleges — where, within a prohibitively wide radius, they may be the only one in their specializations — are better able to keep up with their scholarly communities.

Trained to be critical, and to believe that enthusiasm is weakness, academics are endemically uneasy about expressing fondness for anything, much less a corporatized site enjoyed by the masses. The profession’s tacit norms conspire to turn each of us into a Pococurante, the world-weary Italian senator of Voltaire’s Candide, almost nihilistic in his bloodless detachment from the world’s delights. But, at least so far as Twitter is concerned, I refuse this perverse posture. A resource that unites me with inspiring, like-minded people from across the globe; that lets me evade voices noxious to my well-being; that delivers collaborations and contracts to my doorstep; that opens the profession to people unjustly excluded from it and from whom I want to hear — that, to me, seems about as good as it gets. So, until the site ceases to pay such prodigious dividends, you can continue to tweet me @raf_walk.

Rafael Walker is an assistant professor of English at Baruch College of the City University of New York.

 

 
