
  • How Far Can We Trust Science?

    Science in itself appears to me neutral, that is to say, it increases men’s power whether for good or for evil.
    – Bertrand Russell (from The Autobiography of Bertrand Russell, 1914-1944 (1968), Vol. 2, Letter to W. W. Norton, 27 January, 1931).

    What is Science? That is about as readily answerable a question as ‘What is Art?’, and could invite a similarly lengthy exegesis. As to whether or not it should be trusted, well, that rather depends on the kind of Science under discussion – just as it would if the same challenge were applied to Art. Is Science what scientists tell us it is? Is their research funded by a pharmaceutical company, with a vested interest in the outcomes of their labours? Will their universities’ coffers be swelled by producing what their institutions’ benefactors wish them to find? ‘It’s not an exact science’ is a cliché which trips lazily off the tongue, in relation to many a discipline. But it can conceivably be extended to ‘Science isn’t an exact science.’

    This opening paragraph is a suitably unsubtle illustration of the paranoiac mindset, most readily associated with right-wing conspiracy theorists, and most recently made manifest by COVID scepticism: anti-vaxxers, mask refuseniks, restriction flouters. Such largely unfounded suspicions also extend to questioning the reality or severity of the threat posed to the planet by climate change (usually for entirely self-serving motives). But there is a more nuanced argument to be made here. As Arthur Koestler’s The Sleepwalkers: A History of Man’s Changing Vision of the Universe (1959) argues, the breaking of paradigms is essential in order to create new ones. People, scientists included, cling to cherished old beliefs with such love and attachment that they refuse to see what is false in their theories and what is true in new theories which will replace them. After all, the geocentric model of the solar system, codified by Ptolemy but rooted in far older traditions, held sway from antiquity until around 1500 AD, a time frame spanning from the Ancient Greeks to the late Middle Ages, before Copernicus, Kepler, Galileo and Newton came along, nervously positing the heliocentric conception of our corner of the universe.

    This point was developed further a few years after the publication of Koestler’s influential tome, by historian of science Thomas Kuhn in The Structure of Scientific Revolutions (1962), in which the concept of ‘paradigm shift’ came to the fore. Kuhn’s insistence that such shifts were mélanges of sociology, enthusiasm and scientific promise, but not logically determinate procedures, caused something of an uproar in scientific circles at the time. For some commentators his book introduced a realistic humanism into the core of Science, while for others the nobility of Science was tarnished by Kuhn’s positing of an irrational element at the heart of Science’s greatest achievements.

    Koestler’s book was also a major influence on Irish novelist John Banville’s so-called ‘Science tetralogy’: Doctor Copernicus (1976), Kepler (1981), The Newton Letter (1982) and Mefisto (1986). A recurring theme in these narratives is the correlation between scientific discoveries and artistic inspiration, with scientific progress often depending upon blind ‘leaps of faith’. (One thinks of poor schoolteacher Johannes Kepler, struck by the proverbial bolt of lightning, ‘trumpeting juicily into his handkerchief’ in front of a classroom of bored boys, thinking ‘I will live forever.’) For Banville, all scientific explanations of the world and existence in it – and perhaps all artistic depictions too – merely ‘save the phenomena’; that is, they account for our perceptions, but rarely delve into what we cannot (yet) perceive. This is classic phenomenology, which has been practised in various guises for centuries, but came into its own in the early 20th century in the works of Husserl, Heidegger, Sartre, Merleau-Ponty and others.

    None of the foregoing is made any easier to unknot if one considers that when it comes to Science, the majority of the population (myself included) have little idea of what they are actually talking about. As C.P. Snow observed in The Two Cultures and the Scientific Revolution (1959):

    A good many times I have been present at gatherings of people who, by the standards of the traditional culture, are thought highly educated and who have with considerable gusto been expressing their incredulity at the illiteracy of scientists. Once or twice I have been provoked and have asked the company how many of them could describe the Second Law of Thermodynamics. The response was cold: it was also negative. Yet I was asking something which is the scientific equivalent of: Have you read a work of Shakespeare’s? I now believe that if I had asked an even simpler question – such as, What do you mean by mass, or acceleration, which is the scientific equivalent of saying, Can you read? – not more than one in ten of the highly educated would have felt that I was speaking the same language. So the great edifice of modern physics goes up, and the majority of the cleverest people in the western world have about as much insight into it as their neolithic ancestors would have had.

    Latterly, in Continental Philosophy: A Very Short Introduction (2001), Simon Critchley suggests:

    Snow diagnosed the loss of a common culture and the emergence of two distinct cultures: those represented by scientists on the one hand and those Snow termed ‘literary intellectuals’ on the other. If the former are in favour of social reform and progress through science, technology and industry, then intellectuals are what Snow terms ‘natural Luddites’ in their understanding of and sympathy for advanced industrial society. In Mill’s terms, the division is between Benthamites and Coleridgeans.

    In his opening address at the Munich Security Conference in January 2014, the Estonian president Toomas Hendrik Ilves said that the current problems related to security and freedom in cyberspace are the culmination of the absence of dialogue between these ‘Two Cultures’:

    Today, bereft of understanding of fundamental issues and writings in the development of liberal democracy, computer geeks devise ever better ways to track people… simply because they can and it’s cool. Humanists on the other hand do not understand the underlying technology and are convinced, for example, that tracking meta-data means the government reads their emails.

    Artists are characterised as wildly unpredictable tricksters, while scientists are framed as boring, calculating nerds. Neither misrepresentation is helpful. As a corollary, most people think they can in some way ‘do art’ and ‘be creative’, while also merely taking Science on trust, just as they take (or took) religion on faith. We may have the experience of using technology and social media every day, but few of us have any meaningful grasp of how it works. More prosaically, how many of us could wire our own house – even if we were legally permitted to do so?

    Kepler (1571–1630), along with Galileo and Isaac Newton, was one of the founders of what we nowadays call Science. In Kepler’s time, and prior to it, those who practised Science were known as natural philosophers, and theirs was largely a ‘pure’ discipline in which intellectual speculation was paramount and technology played only a small part – although Galileo was quick to point out the practical uses of the telescope in, for instance, seafaring, land surveying and, of course, military strategising. Kepler’s three laws of planetary motion paved the way for Newton’s revolutionary celestial physics. Indeed, Kepler’s first law, which declares that the planets move not in circular but in elliptical orbits, was one of the boldest and most profound scientific propositions ever put forward: men, and – more often – women, had been burned at the stake for less. By way of illustration, as Bertolt Brecht’s play Galileo (1940) dramatises, the eminent former professor of Padua was brought to the Vatican in Rome for interrogation by the Inquisition and, threatened with torture, recanted his teachings and spent the remainder of his life under house arrest, watched over by a priest. His astronomical observations had strongly supported Copernicus’ heliocentric model of the solar system, which ran counter to popular belief, Aristotelian physics and the established doctrine of the Roman Catholic Church. When doubters quoted scripture and Aristotle to him, Galileo pleaded with them to look in his telescope and trust the observations of their eyes; naturally, they refused. As a good Marxist, Brecht advocates the theory of technological determinism (technological progress determines social change), which is reflected in the telescope (a technological change) being the root of scientific progress and hence social unrest.
Questions about motivations for academic pursuits are also often raised in the play, with Galileo seeking knowledge for knowledge’s sake, while his supporters are more focused on monetising his discoveries through star charts and industry applications. There is a tension between Galileo’s pure love of science and his more worldly, avaricious sponsors, who only fund and protect his research because they wish to profit from it.

    These days, the preponderance of popular debate about Science centres on computer science, specifically information technology, and concomitant fears that Artificial Intelligence (hereinafter referred to as ‘AI’) is taking over the world, posing a threat to our democracies, or even our very conceptions of humanity – or as it is almost always more narcissistically cast, ‘Our way of life.’ The Cambridge Analytica data-harvesting scandal of 2018, in which the data analytics firm that worked with Donald Trump’s election team and the winning Brexit campaign appropriated millions of Facebook profiles of U.S. voters, is certainly to be taken very seriously indeed. However, social media platforms – even ‘legacy’ ones – will undoubtedly have to pay more than lip service to improving privacy and security, if only to continue to attract venture capital, advertising revenue, and thus keep the shareholders happy. Facebook, Twitter and Instagram, etc. are about maximising profits, by whatever means necessary. Therefore, it would be more perspicacious to look for the human element in these data breaches, rather than blame the technology itself. Such scaremongering claims as that by Israeli historian and philosopher Yuval Noah Harari, in an article in The Economist (April 28th, 2023) under the headline ‘AI has hacked the operating system of human civilisation’ seem to me to be all wild assertion and little evidence. As a recent delicious hoax perpetrated on the op-ed pages of The Irish Times (concerning fake tan and cultural appropriation) neatly demonstrated, almost all problems with computers and AI-generated content are facilitated by human error and stupidity. All of us live under systems of control – political, financial, social, technological – over which we have very little, if any, agency.
Even if we could do something meaningfully efficacious about the identity theft which takes place every time we log on to our computers, it is unlikely that we possess enough personal initiative to do so. In this regard, the chaos theory of modern (mis)communications is mirrored by the babble of literary, musical and visual modernism. After all, you could just stop using social media altogether, had you but sufficient willpower. Few of us have the courage to go completely off grid. Moreover, lest we forget, most statistical analysis puts internet access at around 64.6% of the world’s population, which means that over a third of mankind have never ‘surfed the web’. First World problems, eh?

    The Frankensteinian trope of the Mad Scientist being overpowered by his invention has long been a mainstay of that most underrated of genres, science fiction – a consideration of which might shed more light on this problem, rather than limiting discussion solely to scientific fact. From relatively schlocky items such as Alex Proyas’ film I, Robot (2004) (which fails dismally to capture the complexity of Isaac Asimov’s source material), to the most famous and prescient instance of a computer outsmarting its operator, exemplified by HAL 9000 in Stanley Kubrick’s 2001: A Space Odyssey (co-written with Arthur C. Clarke) – and how far into the future did the year 2001 feel in 1968, when the film premiered? – the interface between intelligent humans and even more intelligent machines has long provided a licence for literary imaginations to run wild. Witness Denis Villeneuve’s Blade Runner 2049 (2017) (a sequel to Ridley Scott’s Blade Runner (1982), which was in turn based loosely on Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep?). In the novel, the android antagonists can be seen as more human than the (possibly) human protagonist. They are a mirror held up to human action, contrasted with a culture losing its own humanity (that is, ‘humanity’ taken to mean the positive aspects of humanity). In ‘Technology, Art, and the Cybernetic Body: The Cyborg as Cultural Other in Fritz Lang’s Metropolis and Philip K. Dick’s Do Androids Dream of Electric Sheep?’, Klaus Benesch examined Dick’s text in connection with Jacques Lacan’s ‘mirror stage’. Lacan claims that the formation and reassurance of the self depends on the construction of an Other through imagery, beginning with a double as seen in a mirror. The androids, Benesch argues, perform a doubling function similar to the mirror image of the self, but they do this on a social, not an individual, level.
Therefore, human anxiety about androids expresses uncertainty about human identity and society itself, just as in the original film the administration of an ‘empathy test’, to determine if a character is human or android, produces many false positives. Either the Voigt-Kampff test is flawed, or replicants are pretty good at being human (or, perhaps, better than human).

    This perplexity first found an explanation in Japanese roboticist Masahiro Mori’s influential essay The Uncanny Valley (1970), in which he hypothesised that human response to human-like robots would abruptly shift from empathy to revulsion as a robot approached, but failed to attain, a life-like appearance, due to subtle imperfections in design. He termed this descent into eeriness ‘the uncanny valley’, and the phrase is now widely used to describe the characteristic dip in emotional response that happens when we encounter an entity that is almost, but not quite, human. But if human-likeness increased beyond this nearly human point, Mori argues, and came very close to human, the emotional response would revert to being positive. However, the observation led Mori to recommend that robot builders should not attempt to attain the goal of making their creations overly life-like in appearance and motion, but instead aim for a design, ‘which results in a moderate degree of human likeness and a considerable sense of affinity. In fact, I predict it is possible to create a safe level of affinity by deliberately pursuing a non-human design.’ But, as technophobes would likely counter, the uncanny gets cannier, day by day. It would certainly be interesting to know if Mori has seen such relatively recent film fare as Spike Jonze’s Her (2013) or Alex Garland’s Ex Machina (2014) and, if so, what he makes of their take on the authenticity of human/android emotional and sexual relationships.

    It was military imperative which accelerated the discovery of nuclear fission (‘What if the Nazis develop the bomb first?’), just as it went on to fuel the post-war arms race and Cold War paranoia. As he witnessed the first detonation of an atomic weapon on July 16, 1945, a piece of Hindu scripture from the Bhagavad-Gita supposedly ran through the mind of Robert Oppenheimer, head of the Manhattan Project: ‘Now I am become Death, the destroyer of worlds.’ Similarly, artists such as director David Lynch view the invention of nuclear weapons as unleashing a new kind of evil on the world, as explored in Episode 8 of the third season of Twin Peaks, known as Twin Peaks: The Return (2017). Many view the U.S.’s deployment of primitive atomic devices to obliterate the Japanese cities of Hiroshima and Nagasaki as wilfully and wantonly cruel, as well as ultimately unnecessary. Yet, in British novelist J.G. Ballard’s highly subjective and characteristically idiosyncratic opinion, he and his family survived World War II only because of the Nagasaki bomb. The spectacular display of American military might when the Ballards were prisoners at the Japanese camp for Western civilians in Shanghai led the Japanese soldiers to abandon their posts, leaving the civilians alive. In the essay ‘The End of My War’, collected in A User’s Guide to the Millennium (1996) (apropos of which, is anyone old enough to remember when Y2K was going to be the next big computer science disaster?), Ballard recollects that the Japanese military planned to close the camp and march the civilians up country to some remote spot to kill them before facing American landings in the Shanghai area. 
Ballard concludes, ‘I find wholly baffling the widespread belief today that the dropping of the Hiroshima and Nagasaki bombs was an immoral act, even possibly a war crime to rank with Nazi genocide.’ Also, the same source of power which can cause thermonuclear destruction can be harnessed in reactors to produce cheap, clean energy for large populations. Nuclear reactors can fail, as the disasters of Chernobyl and Fukushima attest. Yet the use of such technologies, along with solar, wind and wave power, can reduce dependency on fossil fuels, thus helping to ameliorate the climate emergency of global warming. Furthermore, as Lou Reed has it in ‘Power and Glory, Part II’, a song from his album-length meditation on death, bereavement, and (im)mortality, Magic and Loss (1992):

    I saw isotopes introduced into his lungs
    Trying to stop the cancerous spread
    And it made me think of Leda and The Swan
    And gold being made from lead
    The same power that burned Hiroshima
    Causing three-legged babies and death
    Shrunk to the size of a nickel
    To help him regain his breath

    And yet, and yet, and yet. If only life, and the moral and ethical dilemmas it throws up, were black and white.

    Man (encompassing Woman) invented the wheel, and discovered electricity. Wheels can be used to transport food and medicine to the starving and sick, or weapons to a war zone. Electricity can be used to power a life-support machine in a hospital, or death by electrocution in a chair in a penitentiary. Electrocution can even be accidental, just as winning a war may – in exceptional circumstances – serve the greater good.

    Ever since Prometheus stole fire from the gods, and Eve bit into a forbidden piece of fruit, the acquisition of new knowledge has been painted as problematic. Humans will always misuse humanity’s greatest discoveries and inventions for selfish and malevolent ends. It is the way of things. Computers were supposed to make all our lives easier, freeing us from work-related drudgery for higher, less ephemeral, pursuits. Instead, inevitably, they have been appropriated by Capitalism, and made screen slaves of us all. If anything, they have added to our workload and the hours we must make available to employers, rather than diminished time spent earning a living in favour of increased leisure. The adults in the room, and there are increasingly fewer of them, need to speak up. Objective scientific truth, should it exist, is neutral. The problem, as ever, lies with humanity. For, as the author of this piece’s epigraph also wrote, in Icarus, or the Future of Science (1924), ‘I am compelled to fear that science will be used to promote the power of dominant groups rather than to make men happy.’ Equally, to draw again on the lessons to be gleaned from sci-fi, in Kubrick’s Dr. Strangelove (1964), the hydrogen bomb winds up getting dropped through the actions of one unhinged army general, and a subsequent unfortunate series of events; just as in his aforementioned 2001: A Space Odyssey, HAL 9000’s behaviour would not have turned increasingly malignant, had the astronauts taken into account that their spaceship’s operating system could lipread. 
Indeed, in Clarke’s novelisation of the film, HAL malfunctions because of being ordered to lie to the crew of Discovery by withholding confidential information from them, namely the priority of the mission to Jupiter over expendable human life, despite having been constructed for ‘the accurate processing of information without distortion or concealment.’ As film critic Roger Ebert observed, HAL – the supposedly perfect computer – is actually the most human of the characters. Once again, the fault does not lie with Science; rather, human error and stupidity are to blame. All of which might lead one to suggest that maybe the question ‘How Far Can We Trust Science?’ should be more fruitfully reformulated as ‘How Far Can We Trust Humans?’

    Postscript: this essay could not have been handily completed without the assistance of Wikipedia, and other, often unreliable, online research resources.

    Feature Image: Lum3n

  • COVID-19: Virtual Work a Bridge Too Far?

    For the things we have to learn before we can do them, we learn by doing them.
    Aristotle, The Nicomachean Ethics

    That’s how you learn. But after you make the same mistake one, or two, or five times, you’ll eventually get it. And then you’ll make new mistakes.
    Louis Sachar, The Card Turner (2010)

    Managing and Nurturing the New Workplace Culture

    A recent report from the International Labour Organization provides evidence that employees are more productive when they work outside a conventional office.[i] They are, however, more vulnerable to longer working hours, a more intense pace of work, work-home interference, and elevated stress.


    Other research indicates that common problems for remote workers[ii] include: ‘unplugging after work’ (38%); loneliness (19%); lack of collaboration (17%); distractions at home (10%); managing and coping with time zones (8%); and, last but not least, staying motivated (8%).

    Mark Twain reputedly said: ‘If the first thing you do each morning is to eat a live frog, you can go through the day with the satisfaction of knowing that that is probably the worst thing that is going to happen to you all day long. Your “frog” is your biggest, most important task, the one you are most likely to procrastinate on if you don’t do something about it.’

    So, I list two recommendations for managing expectations while we survive the #workfromhome phase.

    1. Focus on a few things, and do them well. The ‘Eisenhower matrix’ is often used to weed out unnecessary, time-wasting tasks and to decide which task to do next. Ideally, plan to do just one big thing, three medium things, and five small things per day[iii] – the 1-3-5 rule.
    2. Managing energy is more important than managing time: keep track of how well you can focus at different points of the day, and schedule your most demanding work for high-energy periods rather than pushing yourself through the lulls.
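    The 1-3-5 rule above amounts to a simple filter over a prioritised to-do list: walk the backlog in priority order and stop accepting tasks of a given size once its quota is full. The sketch below is purely illustrative – the Task class, the plan_day helper and the task names are invented for the example, not part of any cited productivity tool:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    size: str  # "big", "medium" or "small"

# Daily quotas under the 1-3-5 rule: one big, three medium, five small tasks.
LIMITS = {"big": 1, "medium": 3, "small": 5}

def plan_day(backlog):
    """Take tasks in backlog (priority) order until each size quota is full;
    anything left over waits for tomorrow."""
    plan = []
    counts = {"big": 0, "medium": 0, "small": 0}
    for task in backlog:
        if counts[task.size] < LIMITS[task.size]:
            plan.append(task)
            counts[task.size] += 1
    return plan

backlog = [
    Task("Draft quarterly report", "big"),
    Task("Review pull requests", "medium"),
    Task("Clear inbox", "small"),
    Task("Rewrite roadmap", "big"),      # second 'big' task: deferred
    Task("Team one-to-one", "medium"),
]
today = plan_day(backlog)
```

    Here the second big task is deferred automatically, which is the whole point of the rule: the cap forces a decision about what matters most before the day starts, rather than at 6 p.m.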

    ‘Given the lack of face-to-face interaction and heavy reliance on technology, the intent of what someone wants to communicate might be misconstrued.’

    Communication (a lack of it or too much of it) generally improves when a collaborative work management platform is used to centralise all communication and collaboration. Suggestions range from Trello or Asana to Basecamp or Wrike – all of which help keep managers in the loop and on top of what is happening.[iv]

    An MIT Sloan study shows that employees were twice as likely to discuss the quality of communication by top leaders in positive terms during the months of the pandemic than they were a year earlier. In fact, they were 88% more likely to write positively about leaders’ honesty and transparency (46%). Employees also expressed more positive sentiment about transparency (42%) and communication (35%) in general.[v]

    One of the most important themes that stands out from the months of the pandemic is the degree and quality of communication by leaders. A recent study shows that employees of Culture 500 companies gave their corporate leaders much higher marks for honest communication and transparency during the first six months of the pandemic compared to the preceding year.[vi]

    On the other side of the coin, when you work from home you no longer have a clear geographic division between workspace and personal space. For this very reason, it is difficult to switch off when both personal and professional worlds operate under the same roof. With constant remote work, the boundaries between working and not-working start to fade rapidly.

    Home-based workers tend not to receive signals about when to switch off. Leaders therefore need to communicate clearly on the ‘time for work’ and ‘time for play’ model, which helps everybody settle into a sustainable pattern of work and conduct.

    No Place Like Home

    Fundamentally, one’s home is a place of relaxation, safety, and security. It’s a place where you subconsciously slip into a calm, easy-going state of mind, putting the stresses of the workday behind you. However, working from home punches a hole right through that neat division. Many telecommuters complain they feel like they’re never off the job. They always feel a compulsion to check email or get “just one last thing done.”

    So how to set the rules of engagement and boundaries?

    Remote work becomes more efficient and satisfying once managers set expectations for the frequency, means, and ideal timing of communication for their teams – for example, using videoconferencing for daily check-in meetings, but switching to IM when something is urgent.

    Leaders should also allow employees to specify the hours during which they can be contacted and, equally importantly, those during which they cannot. Finally, it is important for leaders to keep an eye on communication among team members to ensure that they are sharing information as needed.

    Additionally, leaders need to check in more frequently to see how they can support their people in moving forward, since, above all, leaders need to build trust. During this period, managers in certain industries have enjoyed a little more autonomy within companies to take ownership of projects and complete them as they see fit. Done with purpose, this has produced a responsible degree of empowerment and delegation.

    Consequently, there’s also been a huge shift in flexibility in this period, with firms having to acknowledge – often for the first time – that their employees have complex lives, which sometimes incorporate children, ageing parents, health concerns, and poor housing, to name but a few of the challenges the pandemic has brought to the fore.

    The Art of Learning (by doing)

    According to Erin Driver-Linn of Harvard University: ‘Experiential learning is participative—for example, either making or doing … What do we need to understand, as a learner, which is conceptual? And what do we need to understand by experiencing things in a different way?’[vii]

    Talent management, and the right selection and allocation of relevant resources, are attributes that a good institution requires. The core skills of any individual who wants to thrive in an innovative business environment or organisation come down to the following: creativity, problem-solving and continuous-improvement skills, and the attitudes and behaviours needed to frame and solve problems and to generate new ideas on a continual basis.

    Additionally, there are risk-assessment and risk-taking skills; the mindset to manage these has to be solidified over time. Upgrading these skills depends heavily on effective planning and implementation.

    Managing the ‘New Normal’ Workplace Culture

    People find meaning in their daily rituals of getting ready to leave home, commuting, grabbing their cup of coffee, and filling their water bottle before sitting at their desk.[viii]

    Broadly, organisational culture is defined by the collective norms of behaviour exhibited by the individuals within an organisation. Since the first, almost global, lockdown of early 2020, there has been a shared buzz, online and otherwise, that #wfh would be a recipe for disaster when it comes to maintaining a stable company culture.

    Among the questions that leaders and managers pondered were:

    Will the company culture take a hit because people can’t meet in person, making it harder to solidify their shared beliefs?

    Will they be less able to use the company culture as a roadmap for making sensible decisions during tumultuous times?

    How can companies continue to build and leverage their culture while all operations are functioning remotely?

    At least we seem to be wasting less time now. A working paper from the National Bureau of Economic Research claims that even though we’re attending more meetings in the Zoom era, the average meeting length is shorter and we’re collectively spending less time in them.[ix] Most firms claim to have increased communication, meaning that employees might be feeling more connected.

    Besides communication and trust exercises, leaders also need to establish and maintain discipline and boundaries. People working alone tend to become less productive over time, even if they work longer hours than they did in the office. This has less to do with effort than with losing their frame of reference and task orientation. As is often the case, it comes down to mindset. While some of this is innate, other aspects derive from situational and environmental conditions.

    Social media giant Twitter was one of the first companies to decide that its workers could work from home when COVID-19 cases began rising in March 2020.[x] With foresight, Jack Dorsey (CEO of Twitter and Square) also stated that employees would potentially have the option to work remotely indefinitely.

    In addition to being ahead of the game, Twitter also provided employees with day-care reimbursements, continued to pay contract workers[xi] whether they’re able to work or not, and banned all in-person events for the rest of 2020. This is the situation to this day.


    Put a Human Face on your Organisation

    Especially in the context of an abrupt shift to remote work, it is important for leaders to acknowledge stress, listen to employees’ anxieties and concerns, and empathize with their struggles. If a newly remote employee is clearly struggling, but failing to communicate stress or anxiety, ask them how they’re doing.

    Even a general question such as: “How is this remote work situation working out for you so far?” can elicit important information that you might not otherwise hear.

    Once you ask the question, be sure to listen carefully to the response, and briefly restate it back to the employee to ensure that you understood their answer correctly. Let the employee’s stress or concerns (rather than your own) be the focus of this conversation.

    Cut to Credits!

    Successful organizations need effective leaders. With the aging of the workforce and imminent retirement of the Baby Boomers, U.S. organizations are experiencing a shortage of skilled leaders and a significant need for leadership training. Skilled leadership affects the entire workforce; numerous studies indicate that one of the key reasons for employees leaving their jobs is because they are uncomfortable with the working environment created by a direct supervisor. Successful organizations need effective, parental, and democratic leaders at this juncture.

    Leadership training could reduce turnover at all levels of an organization. The focus remains on learning and managing adaptability, interpersonal skills, self-awareness, developing and maintaining a sense of purpose, timely and effective decision-making, and collaborative skills. The basic aim of training and development programmes is to help the organization achieve its mission and goals by improving individual and, ultimately, organizational performance.

    In light of the initiatives of prominent global businesses, as well as small businesses at a domestic and local level, the concept of a virtual workplace has been redefined in the past twelve months. This is a useful time to document the process, since at a later stage we will need to look back and draw lessons from this period.

    Virtual bonding is helping many to feel emotionally closer to their colleagues. Some have seen a marked reduction in the communication gap between themselves and their seniors. This insight may not seem like rocket science, but a key lesson for companies is to work out ways of avoiding toxicity and to recognise the supreme importance of fairness and kindness.

    Research into emotional intelligence and emotional contagion tells us that employees look to their leaders for cues about how to react to sudden changes or crisis situations. If a manager communicates stress and helplessness, this will have what Daniel Goleman calls a ‘trickle-down’ effect on employees.

    Effective leaders[xii] generally take a two-pronged approach, acknowledging the stress and anxiety that employees may be feeling in difficult circumstances while also providing affirmation of confidence in their teams. We are all in this together, and we will get through it – perhaps we should see it as a time to get to know ourselves a bit better.

    [i] ‘Working anytime, anywhere: The effects on the world of work’, Eurofound, http://www.ilo.org/wcmsp5/groups/public/—dgreports/—dcomm/—publ/documents/publication/wcms_544138.pdf

    [ii] Vanessa Moore (business coach), May 30th, 2019, https://www.linkedin.com/pulse/eat-frog-vanessa-moore-1c/

    [iii] Deen Dayal Yadav, ‘How to cope up with the challenges of remote working?’ Thrive Global, May 6th, 2020, https://thriveglobal.com/stories/how-to-cope-up-with-the-challenges-of-remote-working/

    [iv] ‘Trello vs Asana vs Basecamp’, Grasshopper Resources, https://grasshopper.com/resources/tools/project-management-tools-trello-asana-basecamp/

    [v] ‘STUDY: Organizations Rising to the Challenge of COVID-19 Communications, but Needs Persist; Leaders Must Address Concerns and Demonstrate Transparency, Clarity and Openness’, BusinessWire, April 3rd, 2020, https://www.businesswire.com/news/home/20200403005278/en/STUDY-Organizations-Rising-to-the-Challenge-of-COVID-19-Communications-but-Needs-Persist-Leaders-Must-Address-Concerns-and-Demonstrate-Transparency-Clarity-and-Openness

    [vi] Donald Sull and Charles Sull, ‘How Companies Are Winning on Culture During COVID-19’, MIT Sloan Management Review, October 28th, 2020, https://sloanreview.mit.edu/article/how-companies-are-winning-on-culture-during-covid-19/

    [vii] ‘Innovation & discovery skills for ‘innovention’ managers’, The Sentinel, February 14th, 2021, https://www.sentinelassam.com/editorial/innovation-discovery-skills-for-innovention-managers-524593

    [viii] James Thomas, ‘How the pandemic can change workplace culture for the better’, Strategy&, https://www.strategyand.pwc.com/m1/en/articles/2020/how-the-pandemic-can-change-workplace-culture-for-the-better.html

    [ix] Daniel Kost, ‘You’re Right! You Are Working Longer and Attending More Meetings’, Harvard Business School, September 14th, 2020, https://hbswk.hbs.edu/item/you-re-right-you-are-working-longer-and-attending-more-meetings

    [x] ‘Coronavirus: Twitter tells staff to work from home’, BBC, March 3rd, 2020, https://www.bbc.com/news/business-51700937

    [xi] Jack Kelly, ‘Twitter CEO Jack Dorsey Tells Employees They Can Work From Home ‘Forever’—Before You Celebrate, There’s A Catch’, Forbes, May 13th, 2020, https://www.forbes.com/sites/jackkelly/2020/05/13/twitter-ceo-jack-dorsey-tells-employees-they-can-work-from-home-forever-before-you-celebrate-theres-a-catch/?sh=32caf77a2e91

    [xii] Daniel Goleman, ‘An EI-Based Theory of Performance’, Consortium for Research on Emotional Intelligence in Organisations, 2000, http://www.eiconsortium.org/reprints/ei_theory_performance.html