View | Details | Raw Unified | Return to bug 1043231

(-)a/Documentation/dmaengine/client.txt (-2 / +36 lines)
@@ -117,7 +117,7 @@ The slave DMA usage consists of following steps:
 	transaction.
 
 	For cyclic DMA, a callback function may wish to terminate the
-	DMA via dmaengine_terminate_all().
+	DMA via dmaengine_terminate_async().
 
 	Therefore, it is important that DMA engine drivers drop any
 	locks before calling the callback function which may cause a
@@ -155,12 +155,29 @@ The slave DMA usage consists of following steps:
 
 Further APIs:
 
-1. int dmaengine_terminate_all(struct dma_chan *chan)
+1. int dmaengine_terminate_sync(struct dma_chan *chan)
+   int dmaengine_terminate_async(struct dma_chan *chan)
+   int dmaengine_terminate_all(struct dma_chan *chan) /* DEPRECATED */
 
    This causes all activity for the DMA channel to be stopped, and may
    discard data in the DMA FIFO which hasn't been fully transferred.
    No callback functions will be called for any incomplete transfers.
 
+   Two variants of this function are available.
+
+   dmaengine_terminate_async() might not wait until the DMA has been fully
+   stopped or until any running complete callbacks have finished. But it is
+   possible to call dmaengine_terminate_async() from atomic context or from
+   within a complete callback. dmaengine_synchronize() must be called before it
+   is safe to free the memory accessed by the DMA transfer or free resources
+   accessed from within the complete callback.
+
+   dmaengine_terminate_sync() will wait for the transfer and any running
+   complete callbacks to finish before it returns. But the function must not be
+   called from atomic context or from within a complete callback.
+
+   dmaengine_terminate_all() is deprecated and should not be used in new code.
+
 2. int dmaengine_pause(struct dma_chan *chan)
 
    This pauses activity on the DMA channel without data loss.
@@ -186,3 +203,20 @@ Further APIs:
 	a running DMA channel.  It is recommended that DMA engine users
 	pause or stop (via dmaengine_terminate_all()) the channel before
 	using this API.
+
+5. void dmaengine_synchronize(struct dma_chan *chan)
+
+  Synchronize the termination of the DMA channel to the current context.
+
+  This function should be used after dmaengine_terminate_async() to synchronize
+  the termination of the DMA channel to the current context. The function will
+  wait for the transfer and any running complete callbacks to finish before it
+  returns.
+
+  If dmaengine_terminate_async() is used to stop the DMA channel this function
+  must be called before it is safe to free memory accessed by previously
+  submitted descriptors or to free any resources accessed within the complete
+  callback of previously submitted descriptors.
+
+  The behavior of this function is undefined if dma_async_issue_pending() has
+  been called between dmaengine_terminate_async() and this function.
(-)a/Documentation/dmaengine/provider.txt (-2 / +18 lines)
@@ -327,8 +327,24 @@ supported.
 
    * device_terminate_all
      - Aborts all the pending and ongoing transfers on the channel
-     - This command should operate synchronously on the channel,
-       terminating right away all the channels
+     - For aborted transfers the complete callback should not be called
+     - Can be called from atomic context or from within a complete
+       callback of a descriptor. Must not sleep. Drivers must be able
+       to handle this correctly.
+     - Termination may be asynchronous. The driver does not have to
+       wait until the currently active transfer has completely stopped.
+       See device_synchronize.
+
+   * device_synchronize
+     - Must synchronize the termination of a channel to the current
+       context.
+     - Must make sure that memory for previously submitted
+       descriptors is no longer accessed by the DMA controller.
+     - Must make sure that all complete callbacks for previously
+       submitted descriptors have finished running and none are
+       scheduled to run.
+     - May sleep.
 
 Misc notes (stuff that should be documented, but don't really know
 where to put them)
(-)a/drivers/dma/dmaengine.c (-1 / +4 lines)
@@ -266,8 +266,11 @@ static void dma_chan_put(struct dma_chan *chan)
 	module_put(dma_chan_to_owner(chan));
 
 	/* This channel is not in use anymore, free it */
-	if (!chan->client_count && chan->device->device_free_chan_resources)
+	if (!chan->client_count && chan->device->device_free_chan_resources) {
+		/* Make sure all operations have completed */
+		dmaengine_synchronize(chan);
 		chan->device->device_free_chan_resources(chan);
+	}
 
 	/* If the channel is used via a DMA request router, free the mapping */
 	if (chan->router && chan->router->route_free) {
(-)a/include/linux/dmaengine.h (-1 / +90 lines)
@@ -697,6 +697,8 @@ struct dma_filter {
 *	paused. Returns 0 or an error code
 * @device_terminate_all: Aborts all transfers on a channel. Returns 0
 *	or an error code
+ * @device_synchronize: Synchronizes the termination of a transfer to the
+ *	current context.
 * @device_tx_status: poll for transaction completion, the optional
 *	txstate parameter can be supplied with a pointer to get a
 *	struct with auxiliary transfer status information, otherwise the call
@@ -781,6 +783,7 @@ struct dma_device {
 	int (*device_pause)(struct dma_chan *chan);
 	int (*device_resume)(struct dma_chan *chan);
 	int (*device_terminate_all)(struct dma_chan *chan);
+	void (*device_synchronize)(struct dma_chan *chan);
 
 	enum dma_status (*device_tx_status)(struct dma_chan *chan,
 					    dma_cookie_t cookie,
@@ -872,6 +875,13 @@ static inline struct dma_async_tx_descriptor *dmaengine_prep_dma_sg(
 			src_sg, src_nents, flags);
 }
 
+/**
+ * dmaengine_terminate_all() - Terminate all active DMA transfers
+ * @chan: The channel for which to terminate the transfers
+ *
+ * This function is DEPRECATED, use either dmaengine_terminate_sync() or
+ * dmaengine_terminate_async() instead.
+ */
 static inline int dmaengine_terminate_all(struct dma_chan *chan)
 {
 	if (chan->device->device_terminate_all)
@@ -880,6 +890,86 @@ static inline int dmaengine_terminate_all(struct dma_chan *chan)
 	return -ENOSYS;
 }
 
+/**
+ * dmaengine_terminate_async() - Terminate all active DMA transfers
+ * @chan: The channel for which to terminate the transfers
+ *
+ * Calling this function will terminate all active and pending descriptors
+ * that have previously been submitted to the channel. It is not guaranteed
+ * though that the transfer for the active descriptor has stopped when the
+ * function returns. Furthermore it is possible the complete callback of a
+ * submitted transfer is still running when this function returns.
+ *
+ * dmaengine_synchronize() needs to be called before it is safe to free
+ * any memory that is accessed by previously submitted descriptors or before
+ * freeing any resources accessed from within the completion callback of any
+ * previously submitted descriptors.
+ *
+ * This function can be called from atomic context as well as from within a
+ * complete callback of a descriptor submitted on the same channel.
+ *
+ * If neither of the two conditions above applies, consider using
+ * dmaengine_terminate_sync() instead.
+ */
+static inline int dmaengine_terminate_async(struct dma_chan *chan)
+{
+	if (chan->device->device_terminate_all)
+		return chan->device->device_terminate_all(chan);
+
+	return -EINVAL;
+}
+
+/**
+ * dmaengine_synchronize() - Synchronize DMA channel termination
+ * @chan: The channel to synchronize
+ *
+ * Synchronizes the DMA channel termination to the current context. When this
+ * function returns it is guaranteed that all transfers for previously issued
+ * descriptors have stopped and it is safe to free the memory associated
+ * with them. Furthermore it is guaranteed that all complete callback functions
+ * for a previously submitted descriptor have finished running and it is safe to
+ * free resources accessed from within the complete callbacks.
+ *
+ * The behavior of this function is undefined if dma_async_issue_pending() has
+ * been called between dmaengine_terminate_async() and this function.
+ *
+ * This function must only be called from non-atomic context and must not be
+ * called from within a complete callback of a descriptor submitted on the same
+ * channel.
+ */
+static inline void dmaengine_synchronize(struct dma_chan *chan)
+{
+	if (chan->device->device_synchronize)
+		chan->device->device_synchronize(chan);
+}
+
+/**
+ * dmaengine_terminate_sync() - Terminate all active DMA transfers
+ * @chan: The channel for which to terminate the transfers
+ *
+ * Calling this function will terminate all active and pending transfers
+ * that have previously been submitted to the channel. It is similar to
+ * dmaengine_terminate_async() but guarantees that the DMA transfer has actually
+ * stopped and that all complete callbacks have finished running when the
+ * function returns.
+ *
+ * This function must only be called from non-atomic context and must not be
+ * called from within a complete callback of a descriptor submitted on the same
+ * channel.
+ */
+static inline int dmaengine_terminate_sync(struct dma_chan *chan)
+{
+	int ret;
+
+	ret = dmaengine_terminate_async(chan);
+	if (ret)
+		return ret;
+
+	dmaengine_synchronize(chan);
+
+	return 0;
+}
+
 static inline int dmaengine_pause(struct dma_chan *chan)
 {
 	if (chan->device->device_pause)
