Smart Clipboard: Building a simple Python script to bring Generative AI to the clipboard

Categories: Computers, Engineering

Author: Eliott Kalfon

Published: July 14, 2025

The Idea

Travelling on a train on a Friday, or walking through a company's headquarters, you see the same workflow over and over: copy information from one window, paste it into the ChatGPT (or another chat) interface, generate a response, and paste the output back into the original window. Is there a better way?

People who work with me know this: I do not like repetitive manual effort and I do not like complexity.

Within an hour and some trial and error, I had the following application running:

Tool Demo

The idea is to send the content of your clipboard (the “copied” data, see note below) to a GenAI model, and have the model response stored directly in the clipboard, ready to be pasted. This way, no need to go to another window and deal with another interface. This should save both time and brain processing power.

What is the clipboard?

In modern computers, the “clipboard” is a feature that temporarily stores the data users “copy” by pressing Cmd + C or Ctrl + C (depending on your operating system). This data can later be “pasted” using the keyboard shortcut Cmd + V or Ctrl + V.
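Conceptually, the clipboard is just a single shared buffer: copying overwrites whatever is stored, and pasting reads it without clearing it. A toy in-memory model (this is only an illustration of the concept, not the real OS clipboard):

```python
class Clipboard:
    """A toy model of an OS clipboard: one shared slot of data."""

    def __init__(self):
        self._content = ""

    def copy(self, text):
        # Copying (Cmd/Ctrl + C) overwrites whatever was stored before.
        self._content = text

    def paste(self):
        # Pasting (Cmd/Ctrl + V) reads the stored data without clearing it.
        return self._content


clipboard = Clipboard()
clipboard.copy("first")
clipboard.copy("second")   # replaces "first"
print(clipboard.paste())   # pasting returns the most recent copy
print(clipboard.paste())   # pasting again returns the same data
```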

With tools like Cursor or Copilot, this problem of window switching has mostly been solved for code development. This has resulted in significant productivity gains.

Python Implementation

The full code (around 100 lines) and instructions can be found in this repository.

The programme has five main components:

  • Listening to a keyboard shortcut
  • Accessing the data in the clipboard with Python
  • Sending this data to an LLM
  • Placing the response in the clipboard
  • Notifying the user

Listening to a Keyboard Shortcut

There are many ways to do this. I used pynput, which still required a bit of generated boilerplate code. There may well be a better solution, but I am no expert in driving keyboard input from Python.

import os
import sys
import threading
from pynput import keyboard as pynput_keyboard

HOTKEY = os.getenv('HOTKEY', '<ctrl>+<shift>+g')
MODEL = os.getenv('OPENAI_MODEL', 'gpt-4')

def on_activate():
    # Run clipboard processing in a separate thread so the
    # listener stays responsive while the LLM call is in flight
    threading.Thread(target=process_clipboard).start()

def for_canonical(hotkey):
    # Forward key presses to the HotKey state machine
    def inner(key):
        try:
            hotkey.press(key)
        except AttributeError:
            pass
    return inner

def for_release(hotkey):
    # Forward key releases so the HotKey knows when the combo ends
    def inner(key):
        try:
            hotkey.release(key)
        except AttributeError:
            pass
    return inner

def listen_hotkey():
    try:
        hotkey = pynput_keyboard.HotKey(
            pynput_keyboard.HotKey.parse(HOTKEY),
            on_activate
        )
        with pynput_keyboard.Listener(
                on_press=for_canonical(hotkey),
                on_release=for_release(hotkey)) as listener:
            print(f"GPT Clipboard active! Press {HOTKEY} to process clipboard. Press Ctrl+C to quit.")
            print(f"Using model: {MODEL}")
            listener.join()
    except KeyboardInterrupt:
        print("\nGPT Clipboard stopped.")
        sys.exit(0)

The general idea is simple: once the hotkey (keyboard shortcut, in my case Ctrl + Shift + G) is pressed, start a thread running the process_clipboard function.

But what is a thread? "Thread" is short for thread of execution: a part of a programme that the operating system can schedule and run independently 1.

In the context of this application, the main thread listens to keyboard shortcuts. Once the hotkey is pressed on the keyboard, it starts another thread, or programme part. This part of the programme will run independently of the hotkey listener.

If this were executed sequentially, the programme would not be able to both listen for the hotkey and send the request to an LLM. This is an example of concurrency.

To do these two things at the same time, the CPU switches rapidly between threads. Each programme on your machine may be running many of these threads at once.
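The pattern can be seen in a few lines: the main thread stays free to keep working while a worker thread handles the slow job (here, a short sleep stands in for the LLM request):

```python
import threading
import time

results = []

def slow_job():
    # Stand-in for the LLM request: takes a while, then records its result
    time.sleep(0.2)
    results.append("LLM response")

# Start the worker, exactly as the hotkey handler does
worker = threading.Thread(target=slow_job)
worker.start()

# The main thread is free to keep doing other work,
# such as listening for the next hotkey press
results.append("still listening")

worker.join()   # wait for the worker before reading its result
print(results)  # → ['still listening', 'LLM response']
```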

Manipulating Clipboard Data

I used the pyperclip package to read data from and place data in the clipboard with its paste() and copy() functions. There was nothing more to it.

import time

import pyperclip
from openai import OpenAI

API_KEY = os.getenv('OPENAI_API_KEY')
MAX_TOKENS = int(os.getenv('MAX_TOKENS', '1000'))  # default is an assumption

def process_clipboard():
    start_time = time.time()
    try:
        # Get clipboard content
        input_text = pyperclip.paste()
        if not input_text.strip():
            show_notification("GPT Clipboard", "Clipboard is empty!")
            return

        # Create OpenAI client with API key
        client = OpenAI(api_key=API_KEY)

        # Show processing notification
        show_notification("GPT Clipboard", f"Processing with {MODEL}...")

        # Send to GPT
        response = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "user", "content": input_text}
            ],
            max_tokens=MAX_TOKENS
        )

        output_text = response.choices[0].message.content.strip()

        # Copy result to clipboard
        pyperclip.copy(output_text)

        elapsed = time.time() - start_time
        show_notification("GPT Clipboard", f"Done in {elapsed:.1f}s, response copied!")
    except Exception as e:
        # Surface failures (network, auth, ...) instead of dying silently
        show_notification("GPT Clipboard", f"Error: {e}")

User Notifications

Sending user notifications was much easier than I expected. The only issue is that I had no way to test this on other operating systems, so I went for a simple macOS notification via osascript, defaulting to a plain print elsewhere.

import platform

def show_notification(title, message):
    system = platform.system()

    if system == "Darwin":  # macOS
        os.system(f"""
        osascript -e 'display notification "{message}" with title "{title}"'
        """)
    else:  # Linux, Windows or other
        print(f"{title}: {message}")

Further work could include a more robust or cross-platform way to send notifications.
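One possible direction, sketched below under assumptions I could not test outside macOS (notify-send availability varies by Linux distribution): build the notification command per platform and fall back to printing. Using subprocess with an argument list also avoids the shell-quoting pitfalls of interpolating user text into an os.system string.

```python
import platform
import subprocess

def notification_command(title, message, system=None):
    """Return the notification command for this platform,
    or None if we should fall back to printing."""
    system = system or platform.system()
    if system == "Darwin":  # macOS: AppleScript via osascript
        return ["osascript", "-e",
                f'display notification "{message}" with title "{title}"']
    if system == "Linux":  # many desktop distros ship notify-send
        return ["notify-send", title, message]
    return None  # Windows and others: no simple one-liner here

def show_notification(title, message):
    cmd = notification_command(title, message)
    if cmd is None:
        print(f"{title}: {message}")
    else:
        subprocess.run(cmd)
```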

Ideas for improvement

While the current state of the programme integrates seamlessly with any kind of writing task, it does not let the user attach a prompt to the clipboard data.

If you wanted to summarise a paragraph of a PDF, simply copying the paragraph and sending it as-is would not be enough. I thought of creating a second keyboard shortcut that would open a pop-up window where the user could type a custom prompt.

Besides adding complexity, this seemed to defeat the original purpose of the solution, which was built precisely to avoid window switching.
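An alternative that avoids the pop-up entirely would be to register several hotkeys, each bound to a fixed instruction prefix. The hotkeys and prefixes below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical mapping from hotkey to a fixed task instruction
TASKS = {
    '<ctrl>+<shift>+g': '',  # raw pass-through (current behaviour)
    '<ctrl>+<shift>+s': 'Summarise the following text:\n\n',
    '<ctrl>+<shift>+t': 'Translate the following text into English:\n\n',
}

def build_prompt(hotkey, clipboard_text):
    # Prepend the task instruction for this hotkey, if any
    return TASKS.get(hotkey, '') + clipboard_text

print(build_prompt('<ctrl>+<shift>+s', 'Some paragraph.'))
```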

From a technical standpoint, this is still a proof of concept. It would need a bit of work to be made user-ready. Some modifications that could be made:

  • Allow the user to pick an LLM or provider
  • Enable local LLMs
  • Build user notifications for other operating systems
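For the first two ideas, one lightweight approach is to read provider settings from environment variables; the OpenAI Python client accepts a custom base_url, which many local servers (for example Ollama's OpenAI-compatible endpoint) expose. The variable names and defaults below are assumptions for illustration:

```python
import os

def load_llm_config(env=os.environ):
    """Read model/provider settings from the environment, with defaults."""
    return {
        # A local server such as Ollama typically exposes an
        # OpenAI-compatible URL; leave unset for the official endpoint
        "base_url": env.get("OPENAI_BASE_URL"),
        "model": env.get("OPENAI_MODEL", "gpt-4"),
        "api_key": env.get("OPENAI_API_KEY", "not-needed-for-local"),
    }

# Usage sketch:
# cfg = load_llm_config()
# client = OpenAI(api_key=cfg["api_key"], base_url=cfg["base_url"])
```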

Final Thoughts

I hope you found this tool useful and the idea interesting. If you think that any of it can be improved, let me know in the comments or make your own pull request on GitHub.

For more articles like this, subscribe to my newsletter!

Footnotes

  1. “Thread (computer science), Wikipedia” https://simple.wikipedia.org/wiki/Thread_(computer_science), Accessed: 2025-07-14.↩︎
