# TensorFlow Computational Graph

## What is a Computational Graph?

A computational graph is a directed graph used to represent mathematical computations.

- Nodes (vertices) → Operations (like addition, multiplication, activation)
- Edges → Tensors (data flowing between operations)

Simple idea: Graph = Operations + Data flow
## Components of the Graph

### 1. Nodes (Operations / Ops)

- Represent computations
- Examples:
  - Matrix multiplication (`matmul`)
  - Addition
  - Activation functions (ReLU, Sigmoid)

Nodes take tensors as input and produce tensors as output.
### 2. Edges (Tensors)

- Represent data flowing between nodes
- Carry:
  - Inputs
  - Intermediate results
  - Outputs

Think: Edges = Data pipeline
## How to Build a Computational Graph

### Step 1: Define Operations

- Decide what computations you need (e.g., multiplication, loss calculation)

### Step 2: Create Tensors

- Inputs
- Model parameters
- Intermediate values

### Step 3: Connect Operations

- Link outputs of one operation to inputs of another

This creates the graph structure.
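The build steps above can be sketched with the graph-mode API. This is a minimal illustration, not the only way to build a graph; the tensor values and shapes are chosen arbitrarily.

```python
import tensorflow as tf

# Step 2: create tensors; Steps 1 and 3: define an operation and
# connect the tensors to it, all inside one isolated graph
g = tf.Graph()
with g.as_default():
    x = tf.constant([[1.0, 2.0]])     # input tensor (edge), shape (1, 2)
    w = tf.constant([[2.0], [3.0]])   # parameter tensor (edge), shape (2, 1)
    y = tf.matmul(x, w)               # operation (node) connecting them
```

At this point nothing has been computed yet; `g` only holds the plan of the computation.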
## Example (Conceptual)

```python
import tensorflow as tf

a = tf.constant([[1, 2], [3, 4]])
b = tf.constant([[5, 6], [7, 8]])
c = tf.matmul(a, b)
```

Here:
- `a`, `b` → tensors (edges/data)
- `matmul` → node (operation)
## Execution of the Graph (Sessions)

In TensorFlow 1.x style:
- The graph is built first
- Then it is executed using a session

```python
with tf.compat.v1.Session() as sess:
    result = sess.run(c)
    print(result)
```

Important:
- Graph = Blueprint
- Session = Execution
## Visualization (TensorBoard)

TensorFlow provides a tool called TensorBoard.

Use it to:
- Visualize graph structure
- Understand data flow
- Debug models

Example (TensorFlow 1.x compatibility API):

```python
writer = tf.compat.v1.summary.FileWriter("logs/", graph=tf.compat.v1.get_default_graph())
writer.close()
```

Then run TensorBoard to view it.
πΉ Benefits of Computational Graph
β 1. Optimization
- TensorFlow optimizes execution
- Improves speed and memory usage
β 2. Portability
- Graph can be saved and reused
β 3. Debugging
- Visualization helps track errors
β 4. Parallelism
- Independent operations can run simultaneously
## Important Note (Modern TensorFlow)

In TensorFlow 2.x:
- No need for explicit sessions
- Eager execution is the default

Code runs immediately, like normal Python.

But:
- The computational graph still exists internally (for optimization)
## One-Line Summary (Exam Ready)
A TensorFlow computational graph is a directed graph where nodes represent operations and edges represent tensors, enabling efficient execution, optimization, and visualization of machine learning models.
## What is Eager Execution (TensorFlow 2.x)?

Eager Execution = "Run code immediately"

- As soon as you write a line, it executes instantly
- Works like normal Python / NumPy

Example:

```python
import tensorflow as tf

x = tf.constant([1, 2, 3])
print(x)
```

The output appears immediately.

### Simple Understanding

Eager = "No waiting, no graph building, just direct result"
## What is Graph Execution (TensorFlow 1.x)?

Graph Execution = "Build first, run later"

- First: create a computational graph (blueprint)
- Then: execute it using a session

Key idea: the code does NOT run immediately.

### Simple Understanding

Graph = "Plan everything first, then execute"
## Eager vs Graph Execution (Easy Comparison)

| Feature | Eager Execution | Graph Execution |
|---|---|---|
| Execution | Immediate | Delayed |
| Debugging | Easy | Hard |
| Style | Like Python | Like building a model graph |
| Flexibility | High | Less |
| Performance | Slower (sometimes) | Faster (optimized) |
| Default | TF 2.x | TF 1.x |
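The "Immediate vs Delayed" row of the comparison can be demonstrated with a short sketch. In eager mode the Python body runs on every call; with `@tf.function` the Python body runs only while the function is being traced into a graph (normally just the first call for a given input signature). The `calls` list below is only a probe added for illustration.

```python
import tensorflow as tf

calls = []

def eager_double(x):
    calls.append("eager")        # executes on every call in eager mode
    return x * 2

@tf.function
def graph_double(x):
    calls.append("traced")       # executes only while tracing the graph
    return x * 2

eager_double(tf.constant(1))
eager_double(tf.constant(2))     # "eager" is recorded twice

graph_double(tf.constant(1))
graph_double(tf.constant(2))     # "traced" is recorded once, on the first call
```

This is also why side effects like `print()` inside a `@tf.function` appear to "disappear" after the first call: they belong to tracing, not to graph execution.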
## Important Concept: @tf.function

TensorFlow 2.x gives you both worlds:

- By default: eager mode
- With `@tf.function`: graph mode

Example:

```python
import tensorflow as tf

@tf.function
def matmul(a, b):
    return tf.matmul(a, b)
```

What happens:
- The function is converted into a computational graph
- It runs faster (optimized)

### Simple Understanding

`@tf.function` = "Convert Python code → graph for speed"
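Calling the decorated function looks identical to calling a normal Python function; the tracing and graph execution happen behind the scenes:

```python
import tensorflow as tf

@tf.function
def matmul(a, b):
    return tf.matmul(a, b)

a = tf.constant([[1, 2], [3, 4]])
b = tf.constant([[5, 6], [7, 8]])
result = matmul(a, b)    # first call traces the graph, then runs it
print(result.numpy())    # [[19 22], [43 50]]
```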
## Why Use Graph Mode?

Even though eager mode is easy, graph mode is powerful.

Benefits:
- Faster execution (optimization)
- Better for large models
- Can use XLA (Accelerated Linear Algebra) optimization
- Easy to save and deploy models
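The XLA benefit can be sketched with the `jit_compile` flag of `@tf.function`. This assumes a TensorFlow 2.x build with XLA support (standard pip installs include it); the function and values here are only illustrative.

```python
import tensorflow as tf

# jit_compile=True asks TensorFlow to compile the traced graph with XLA,
# which can fuse operations for speed on supported hardware
@tf.function(jit_compile=True)
def dot(a, b):
    return tf.reduce_sum(a * b)

a = tf.constant([1.0, 2.0, 3.0])
b = tf.constant([4.0, 5.0, 6.0])
print(float(dot(a, b)))  # 32.0
```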
## Final Exam Answer (Short)

Eager execution in TensorFlow 2.x executes operations immediately like normal Python, making debugging easy. In contrast, graph execution builds a computational graph first and executes it later in a session. TensorFlow 2.x uses eager execution by default but allows graph execution using `@tf.function` for better performance and optimization.
## How to Visualize a TensorFlow Graph

The main tool is TensorBoard.

### Method (TensorFlow 2.x)

You need to:
1. Convert your function into a graph using `@tf.function`
2. Log it
3. Open TensorBoard
### Step-by-Step

1. Create a graph function:

```python
import tensorflow as tf

@tf.function
def my_func(a, b):
    return tf.matmul(a, b)
```

2. Enable logging and trace the function:

```python
log_dir = "logs/graph"
writer = tf.summary.create_file_writer(log_dir)

tf.summary.trace_on(graph=True, profiler=False)

a = tf.constant([[1, 2]])
b = tf.constant([[3], [4]])
my_func(a, b)

with writer.as_default():
    tf.summary.trace_export(
        name="my_graph",
        step=0,
        profiler_outdir=log_dir
    )
```

3. Run TensorBoard. Open a terminal and run:

```shell
tensorboard --logdir=logs/graph
```

Then open http://localhost:6006 in your browser.
## What You'll See

- Nodes = operations (`MatMul`, etc.)
- Edges = tensors (data flow)
- The full computational graph visualization

### Simple Understanding

TensorBoard = a map of the graph. It shows how data flows inside your model.
## Important Notes

- Without `@tf.function`, the graph won't appear properly
- Eager execution does not create a visible static graph
- The graph is created only when tracing happens
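One way to confirm that tracing is what creates the graph is to ask a `@tf.function` for a concrete function and list the operations in its underlying graph. This sketch uses the public `get_concrete_function` API; the shapes are arbitrary.

```python
import tensorflow as tf

@tf.function
def my_func(a, b):
    return tf.matmul(a, b)

# Tracing happens here: a graph is built for this input signature
concrete = my_func.get_concrete_function(
    tf.TensorSpec([1, 2], tf.int32),
    tf.TensorSpec([2, 1], tf.int32),
)
op_types = [op.type for op in concrete.graph.get_operations()]
print(op_types)  # includes 'MatMul' among the graph's operations
```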
## One-Line Answer (Exam)

TensorFlow graphs can be visualized using TensorBoard by tracing a `@tf.function` and exporting it to log files, which are then displayed as a computational graph.
## How to Manage Graphs and Sessions in TensorFlow

### 1. Managing Graphs

#### a) Use the Default Graph (simple cases)

- Just define operations normally
- TensorFlow automatically handles the graph

Use when:
- Small programs
- No need for separation

#### b) Create Separate Graphs (important)

```python
g = tf.Graph()
with g.as_default():
    a = tf.constant(2)
    b = tf.constant(3)
    c = tf.add(a, b)
```

Why manage graphs like this?
- Avoid mixing operations
- Run multiple models independently

#### c) Switch Between Graphs

- Use `with graph.as_default()`
- This ensures operations go to the correct graph

Think: "Control where your computation is stored"
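The switching idea can be sketched with two independent graphs: each `as_default()` block routes new operations into its own graph, so the two computations never mix. The operations here are arbitrary stand-ins for two separate models.

```python
import tensorflow as tf

g1 = tf.Graph()
with g1.as_default():
    out1 = tf.constant(10) * tf.constant(2)   # belongs to g1

g2 = tf.Graph()
with g2.as_default():
    out2 = tf.constant(10) + tf.constant(2)   # belongs to g2

# Each graph is executed by its own session
with tf.compat.v1.Session(graph=g1) as sess:
    r1 = sess.run(out1)
with tf.compat.v1.Session(graph=g2) as sess:
    r2 = sess.run(out2)
print(r1, r2)  # 20 12
```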
### 2. Managing Sessions

#### a) Use a Context Manager (best practice)

```python
with tf.compat.v1.Session(graph=g) as sess:
    result = sess.run(c)
```

This automatically:
- Opens the session
- Closes the session
- Prevents memory leaks

#### b) Manually Control the Session

```python
sess = tf.compat.v1.Session(graph=g)
result = sess.run(c)
sess.close()
```

Use when:
- You need long-running sessions

#### c) Configure the Session

```python
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.compat.v1.Session(config=config)
```

This helps manage:
- GPU memory
- CPU threads
- Device usage
### 3. Resource Management

- Always close sessions: this frees memory and avoids crashes
- Use the `with` statement: cleaner and safer
### 4. Key Strategy (Very Important)

Good management means:
- Keep graphs separate and organized
- Run them in controlled sessions
- Release resources properly
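The three points above can be combined into one minimal sketch: an isolated graph, a configured session, and a context manager that releases resources on exit. The constants are arbitrary.

```python
import tensorflow as tf

# Keep the graph separate and organized
g = tf.Graph()
with g.as_default():
    total = tf.add(tf.constant(3), tf.constant(4))

# Run it in a controlled session; the context manager releases resources
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True   # ignored on CPU-only installs

with tf.compat.v1.Session(graph=g, config=config) as sess:
    result = sess.run(total)
print(result)  # 7
```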
## Final Short Answer (Exam Ready)

Managing graphs and sessions in TensorFlow involves controlling where operations are added using graphs and executing them using sessions. Graphs can be managed by creating separate graphs and using `as_default()` to organize computations. Sessions are managed by executing graphs using `run()`, configuring resources, and properly closing them using context managers to ensure efficient memory usage.
## How to Close a Session in TensorFlow

### 1. Manually Close the Session

```python
import tensorflow as tf

sess = tf.compat.v1.Session()
# do some work
sess.close()
```

This releases memory and resources.
### 2. Best Practice (Recommended)

Use a context manager (`with` statement):

```python
import tensorflow as tf

with tf.compat.v1.Session() as sess:
    result = sess.run(...)
```

This automatically:
- Opens the session
- Closes the session after the block ends

No need to call `close()` manually.

### Simple Understanding

- `sess.close()` = "Finish work and free memory"
- `with Session()` = "Auto close, safer method"
## Important Notes

- Always close sessions in TensorFlow 1.x
- Not closing can cause:
  - Memory leaks
  - GPU memory issues
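When a context manager is not practical (for example, a long-lived session created in one place and used in another), a `try`/`finally` block gives the same guarantee: the session is closed even if `sess.run()` raises. A small sketch with an arbitrary computation:

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    c = tf.constant(2) * tf.constant(3)

sess = tf.compat.v1.Session(graph=g)
try:
    result = sess.run(c)
finally:
    sess.close()   # runs even if sess.run() raised an exception
print(result)  # 6
```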
## TensorFlow 2.x Reality

No need to close sessions, because:
- TF 2.x uses eager execution
- Sessions are mostly removed
## Exam One-Liner

A TensorFlow session can be closed using `sess.close()` or automatically using a context manager (`with tf.Session()`), which ensures proper resource management.
## Managing Graphs and Sessions in TensorFlow

Managing graphs and sessions is essential in TensorFlow for organizing computations and executing them efficiently.

### 1. Managing Graphs

#### What is a Graph?

A TensorFlow graph (computational graph) represents:
- Nodes → operations
- Edges → tensors (data flow)

Simple idea: Graph = Plan of computation
#### Default Graph

- TensorFlow automatically creates a default graph
- All operations are added to it by default

```python
a = tf.constant(2)
b = tf.constant(3)
c = tf.add(a, b)
```
#### Multiple Graphs (Management)

You can create separate graphs for better control:

```python
g = tf.Graph()
with g.as_default():
    a = tf.constant(2)
    b = tf.constant(3)
    c = tf.add(a, b)
```

Benefits:
- Avoid mixing operations
- Handle multiple models
- Better organization
### 2. Managing Sessions

#### What is a Session?

A session executes the graph.

- Graph = Blueprint
- Session = Execution

#### Running a Session

```python
with tf.compat.v1.Session(graph=g) as sess:
    result = sess.run(c)
    print(result)
```

`run()` executes the graph and returns the output.
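`run()` can also fetch several tensors at once and feed values into placeholders. A sketch with arbitrary operations, using the TF1 compatibility `placeholder` API:

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, shape=(), name="x")
    doubled = x * 2.0
    squared = x * x

# Fetch a list of tensors in one run() call; feed_dict supplies x
with tf.compat.v1.Session(graph=g) as sess:
    d, s = sess.run([doubled, squared], feed_dict={x: 3.0})
print(d, s)  # 6.0 9.0
```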
### 3. Closing Sessions & Resource Management

#### Closing a Session

```python
sess.close()
```

Meaning:
- You are manually ending the session
- It frees:
  - RAM
  - GPU memory

Why is this important? If you don't close sessions:
- Memory keeps getting used
- The program may slow down or crash

#### Better Way (Auto-close)

```python
with tf.compat.v1.Session() as sess:
    result = sess.run(c)
```

This closes the session automatically, which is safer.
### 4. Configuring Sessions

This part is about controlling how TensorFlow runs.

#### Step 1: Create a Config

```python
config = tf.compat.v1.ConfigProto()
```

This creates a settings object.

#### Step 2: Set Options

```python
config.log_device_placement = True
```

This logs whether each operation runs on the CPU or GPU.

```python
config.gpu_options.allow_growth = True
```

Very important:
- GPU memory grows only as needed
- Prevents TensorFlow from claiming all GPU memory at startup

#### Step 3: Use the Config in a Session

```python
with tf.compat.v1.Session(config=config) as sess:
    result = sess.run(c)
    print("Result:", result)
```

Now the session runs with your custom settings.
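Beyond the options above, `ConfigProto` exposes other commonly used knobs. A sketch of two of them; `per_process_gpu_memory_fraction` is an alternative to `allow_growth` (a hard cap instead of on-demand growth), and the thread counts shown here are arbitrary example values:

```python
import tensorflow as tf

config = tf.compat.v1.ConfigProto()

# Hard cap: let this process use at most ~50% of each visible GPU's
# memory (ignored on CPU-only installs)
config.gpu_options.per_process_gpu_memory_fraction = 0.5

# Limit CPU threading; 0 means "let TensorFlow pick"
config.intra_op_parallelism_threads = 2   # threads within one op
config.inter_op_parallelism_threads = 2   # ops run in parallel
```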
### Simple Summary

- `sess.close()` → manually free memory
- `with Session()` → auto close (best)
- `ConfigProto` → control CPU/GPU & performance
## One-Line Exam Answer

Sessions can be managed by closing them using `sess.close()` to release resources, and configured using `tf.compat.v1.ConfigProto()` to control execution settings like device placement and GPU memory usage.
## Key Summary

- Graphs → organize computations
- Sessions → execute computations
- `sess.close()` → free resources
- `with Session()` → automatic management
- `ConfigProto` → control performance & hardware
## Important (Exam Point)

In TensorFlow 2.x:
- Sessions are mostly removed
- Eager execution is the default
- Graphs are handled internally
## Final Exam Answer

In TensorFlow, graphs are used to represent computations as a series of operations and data flow, while sessions are used to execute these graphs. By default, operations are added to a default graph, but multiple graphs can be created using `tf.Graph()` for better management. Sessions execute graphs using `run()` and should be properly closed using `sess.close()` or managed using context managers to release resources. Additionally, sessions can be configured using `tf.compat.v1.ConfigProto()` to control device placement and memory usage, ensuring efficient execution.
This is a comprehensive guide to TensorFlow computational graphs, execution modes, and resource management, formatted for maximum readability and exam preparation.

# TensorFlow: Computational Graphs & Session Management

## 1. The Computational Graph

A computational graph is a directed graph used to represent mathematical computations. It serves as the "blueprint" of a model.

- Nodes (Vertices): Represent Operations (e.g., addition, matrix multiplication, activation functions).
- Edges: Represent Tensors (the data flowing between operations).

Concept: Graph = Operations + Data Flow

Key Components:
- Nodes (Ops): Take tensors as input and produce tensors as output.
- Edges (Tensors): Act as the data pipeline carrying inputs and intermediate results.
## 2. Eager vs. Graph Execution

TensorFlow offers two primary ways to execute code:

| Feature | Eager Execution (TF 2.x) | Graph Execution (TF 1.x) |
|---|---|---|
| Execution | Immediate (runs line-by-line) | Delayed (build first, run later) |
| Debugging | Easy (standard Python tools) | Hard (requires session tracing) |
| Performance | Slower (overhead) | Faster (highly optimized) |
| Default | Yes (in modern TensorFlow) | No (manual setup required) |

### The Best of Both Worlds: @tf.function

In TensorFlow 2.x, you can use the `@tf.function` decorator to convert a Python function into a high-performance static graph.

```python
@tf.function
def fast_multiply(a, b):
    return tf.matmul(a, b)
```
## 3. Managing Graphs & Sessions

In older versions (or compatibility mode), managing resources is crucial to prevent memory leaks.

### Managing Graphs

- Default Graph: Created automatically by TF.
- Custom Graphs: Used to run multiple independent models.

```python
g = tf.Graph()
with g.as_default():
    # Ops defined here belong to graph 'g'
    a = tf.constant(5)
```

### Managing Sessions

A Session is the environment where the graph is actually executed.

- Manual close: Use `sess.close()` to free RAM and GPU memory.
- Context manager (best practice): Use the `with` statement to ensure the session closes automatically.

```python
with tf.compat.v1.Session() as sess:
    result = sess.run(my_op)
# Session is automatically closed here
```
## 4. Session Configuration (ConfigProto)

You can control how TensorFlow interacts with your hardware using `ConfigProto`.

- Device placement: `log_device_placement = True` shows whether ops run on the CPU or GPU.
- GPU memory growth: `allow_growth = True` prevents TF from claiming all available VRAM immediately.

```python
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.compat.v1.Session(config=config)
```
## 5. Visualization with TensorBoard

TensorBoard is the "map" of your model. To visualize a graph:

1. Use `@tf.function` to define the logic.
2. Use `tf.summary.trace_on` to record the graph.
3. Export the trace to a log directory.
4. Launch via terminal: `tensorboard --logdir=logs/`.
## Exam-Ready Summaries

Q: What is a computational graph?
A directed graph where nodes represent operations and edges represent tensors, enabling efficient execution and optimization of ML models.

Q: Why use a context manager (`with` statement) for sessions?
It ensures that resources (RAM/GPU) are automatically released by closing the session as soon as the code block finishes, preventing memory leaks.

Q: How does TF 2.x handle graphs differently than TF 1.x?
TF 2.x uses eager execution by default for a better user experience, whereas TF 1.x required building a static graph and running it inside a `Session`.